In the first part of this series on the private cloud network, we examine whether the physical network is hostile to the private cloud. In part two, we look at how the software-defined networking control plane can integrate virtual and physical switching layers.
The private cloud network must be far more flexible and dynamic than traditional networks. Essentially, private clouds require a network in which services are abstracted away from the underlying virtual and physical infrastructure using Software-Defined Networking (SDN), ultimately enabling Network as a Service (NaaS).
However, aside from proprietary implementations, we are a long way off from NaaS/SDN in a multi-vendor, multi-hypervisor private cloud. In the meantime, there are developments in virtual networking that we can use on the way to software-defined private cloud networks.
Private cloud networks: Managing VLANs across the virtual and physical
As a starting point for this discussion, the private cloud network has two switching layers: the virtual switching layer and the physical switching layer. The physical switch is the standard Ethernet switch we’ve been using for more than 20 years. The virtual switch is a component of every hypervisor. Most hypervisor architectures link the virtual switches together with a common control plane, creating one large distributed virtual switch. There are enhanced virtual switches on the market, as well as continual development of an open source virtual switch, Open vSwitch.
Virtual and physical switches are still two distinct network entities that must work together to enable the private cloud. Most network architects use VLANs to bridge this gap, but this requires physical and virtual switches to stay in lock step.
One approach is to configure every possible VLAN on every trunk and every port in the data center. However, this brute-force strategy doesn’t scale well and is fraught with the risk of misconfiguration and potential security and compliance violations. Another approach is implementing a VLAN learning solution that dynamically manages VLANs across the virtual and physical network, particularly when VMs move. Some solutions work very well but are still proprietary. Edge Virtual Bridging (EVB), IEEE 802.1Qbg, is one developing standard for VLAN learning and mapping.
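To see why brute-force VLAN provisioning is costly, consider a toy comparison of trunking every VLAN everywhere versus pruning each trunk to the VLANs actually in use (the job that EVB-style VLAN learning automates). All port names and VM placements here are hypothetical, purely for illustration:

```python
# Toy comparison: brute-force VLAN trunking vs. dynamic VLAN pruning.
# Port names and VM placements below are illustrative, not a real switch API.

ALL_VLANS = set(range(1, 4095))  # every usable 802.1Q VLAN ID

# Hypothetical inventory: which VM VLANs are live behind each switch port.
vms_behind_port = {
    "eth1": {10, 20},   # hypervisor A hosts VMs on VLANs 10 and 20
    "eth2": {20, 30},   # hypervisor B hosts VMs on VLANs 20 and 30
    "eth3": set(),      # empty host, no VMs yet
}

def brute_force_trunks(ports):
    """Configure every VLAN on every trunk: simple to set up, but every
    port now carries flood traffic for 4094 VLANs it mostly doesn't need."""
    return {port: set(ALL_VLANS) for port in ports}

def pruned_trunks(ports):
    """A VLAN-learning approach: each trunk carries only the VLANs
    actually in use behind it, so flooding scope stays small."""
    return {port: set(vlans) for port, vlans in ports.items()}

brute = brute_force_trunks(vms_behind_port)
pruned = pruned_trunks(vms_behind_port)

print(len(brute["eth3"]))   # 4094 VLANs on a port with no VMs at all
print(len(pruned["eth3"]))  # 0 VLANs on that same port
```

The catch, as noted above, is that the pruned view must be updated every time a VM moves, which is exactly what a dynamic VLAN learning standard has to solve.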
Supporting VLANs across Layer 3: NVGRE and VXLAN
Private cloud network VLANs require wide-scale bridging to support VM movement and communications. This approach does not scale well, nor does it support distribution of workloads across Layer 3 boundaries. To address these VLAN issues, two protocols with multi-vendor backing are gaining traction: VXLAN (Virtual Extensible LAN) and NVGRE (Network Virtualization using Generic Routing Encapsulation). VXLAN and NVGRE are IETF draft standards for encapsulating MAC-layer traffic over IP. By kicking things up a level, we open the door to distributing workloads across Layer 3 boundaries with the VMs still on the same Layer 2 network. This is exciting since it also breaks the fixed relationship between location and identity. It means a VM can maintain its IP address even after it moves to a different subnet. This may be effective but not necessarily efficient.
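The encapsulation itself is simple. In the VXLAN case, an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) is prepended to the original Ethernet frame, and the result is carried inside UDP/IP, which is what lets the frame cross Layer 3 boundaries. A minimal sketch of that framing (the outer UDP/IP headers, added by the sending endpoint's normal IP stack, are omitted):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the I bit set,
    reserved fields, and the 24-bit VXLAN Network Identifier (VNI)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08000000, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """VXLAN payload = header + original Layer 2 frame, ready to be
    placed in a UDP datagram by the sender's IP stack."""
    return vxlan_header(vni) + inner_ethernet_frame

payload = encapsulate(b"\x00" * 60, vni=5000)  # dummy 60-byte inner frame
print(len(payload))       # 68: 8-byte VXLAN header + 60-byte inner frame
print(payload[:8].hex())  # flags/reserved, then VNI 5000 (0x001388)
```

The 24-bit VNI is the scalability win over the 12-bit 802.1Q VLAN ID: roughly 16 million segments instead of 4094.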
Where VXLAN and NVGRE fall short in the private cloud network
VXLAN and NVGRE are a big step toward a more dynamic and extensible private cloud network, but they are not the total solution. They are encapsulation protocols, and don’t have a control plane. Instead they rely upon other network functions. For example, VXLAN relies upon IP multicast (typically built with Protocol Independent Multicast, or PIM) to carry its UDP-encapsulated flood traffic, and establishing VM-to-VM communications requires Layer 2 flooding and dynamic MAC address learning.
What's more, VXLAN and NVGRE do not solve a fundamental challenge of extending Layer 2 domains across the network core: the “network trombone.” Even though two VMs might be on the same switch, their traffic may still route out to the core network and back, tracing the shape of a trombone. This is comparable to making your home airport SFO because you live in San Francisco, then moving to New York but still traveling first to SFO before going anywhere else, including New Jersey. It’s a very inefficient architecture that doesn’t scale. Finally, VXLAN is a virtual construct and doesn’t connect to physical devices like firewalls, load balancers, etc.
Can SDN solve the VLAN challenge for private cloud networks?
Going beyond VXLAN and NVGRE, we need a strong control plane that integrates virtual and physical switching. On the open standards front, the most exciting activity is the OpenFlow project from the Open Networking Foundation (ONF). OpenFlow abstracts the control applications (controllers) away from the underlying data planes (switches).
OpenFlow holds the promise of a new way to switch packets across virtual and physical switches, eliminating the need for encapsulation, tagging and VLANs, while still supporting multi-tenancy, VM motion and scalability. It will truly be SDN delivering NaaS as part of the private cloud.
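The essence of that promise is the match/action split: a controller programs flow entries into switches, and the switches simply execute table lookups, with no tags or encapsulation required to express tenant isolation or steering. A minimal sketch of the idea (the field names and action strings are illustrative, not the OpenFlow wire format):

```python
# Minimal sketch of the OpenFlow control/data-plane split: a controller
# installs match/action flow entries; switches only do table lookups.
# Field names and action strings are illustrative, not the wire format.

class FlowSwitch:
    def __init__(self):
        self.flow_table = []  # ordered list of (match-dict, action) entries

    def install_flow(self, match, action):
        # In real OpenFlow this arrives as a flow-mod from the controller.
        self.flow_table.append((match, action))

    def handle_packet(self, packet):
        # First matching entry wins; a table miss goes to the controller,
        # which decides the policy and installs a new flow.
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"

switch = FlowSwitch()
# Controller policy: steer traffic for one tenant's VM out port 2,
# expressed directly as a flow entry rather than a VLAN tag.
switch.install_flow({"dst_mac": "aa:aa:aa:00:00:02"}, "output:2")

print(switch.handle_packet({"dst_mac": "aa:aa:aa:00:00:02"}))  # output:2
print(switch.handle_packet({"dst_mac": "bb:bb:bb:00:00:09"}))  # table miss
```

Because the controller holds the whole topology, it can place these entries on virtual and physical switches alike, which is what makes the multi-tenancy and VM-motion claims above plausible.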
Unfortunately, the key word is “will,” since OpenFlow implementation is growing but still limited. It is important to press switch vendors on their plans for SDN/NaaS as well as their support for OpenFlow. We need to move beyond the VLAN for private cloud networking. VXLAN and NVGRE are steps in the right direction, but they are not the end solution.
There are also additional reasons why we need to move to SDN/NaaS for the private cloud. In the next article we focus on the performance and compliance challenges for today’s private cloud networking.