In SDN, vendors have finally transitioned from pondering product to actually shipping it. At the heart of that transition are Cisco Systems Inc. and VMware Inc., which have fleshed out network virtualization and SDN strategies centered on their acquisitions of Insieme and Nicira, respectively. The VMware network virtualization strategy centers completely on software, while Cisco SDN is hardware-driven.
VMware network virtualization: A toe in every hypervisor pool
VMware NSX comes in two flavors: vSphere and multi-hypervisor. The vSphere edition integrates into existing VMware ESXi deployments and builds upon the distributed virtual switch (dVS). Network functions such as routing, firewalling and load balancing are borrowed from vCloud Networking and Security (vCNS). Meanwhile, the multi-hypervisor edition serves the needs of Xen and KVM deployments, building upon the Open vSwitch project. While NSX can take advantage of hardware Layer 2/3 gateway services from switch vendors such as Arista, Brocade and Juniper Networks, it is not tied to any of them and will run across any physical IP network. This will score points with administrators because it doesn't force a potentially unpalatable vendor selection and costly upgrade to get SDN features into an existing network.
Creating network overlays with VMware network virtualization
The NSX controller cluster consists of three or more logical systems that activate various hypervisor functions, such as routing, switching and firewalling. The cluster is managed through either the NSX Manager Web interface or the NSX RESTful API. NSX builds upon vSphere's ability to easily stand up and tear down hosts, extending it to the network as a whole. Logical networks, firewalls, routers and load balancers can be created with little more effort than it takes to document them in Visio. By creating overlay networks on existing switches, much of the complexity is hidden from the underlying physical network, as traffic is tunneled point-to-point using the GRE, VXLAN (UDP transport) or STT (a TCP-like transport) protocols.
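To make the API-driven workflow concrete, the sketch below builds a request for creating a logical switch through an NSX-style RESTful API. The endpoint path, field names and values here are illustrative assumptions for the sake of example, not the documented NSX schema; a real deployment would follow the vendor's API reference and handle authentication.

```python
import json

# Hypothetical NSX Manager address -- an assumption for illustration.
NSX_MANAGER = "https://nsx-manager.example.com"

def build_logical_switch_request(name, transport_zone_id, vni):
    """Return the URL and JSON body for creating a logical (overlay) switch.

    The path and field names below are assumed for illustration; the point
    is that a logical network is just one declarative API call, not a
    box-by-box configuration exercise.
    """
    url = f"{NSX_MANAGER}/api/ws.v1/lswitch"  # assumed endpoint path
    body = {
        "display_name": name,
        "transport_zones": [{"zone_uuid": transport_zone_id}],
        "tunnel_key": vni,  # e.g. a VXLAN network identifier
    }
    return url, json.dumps(body)

url, body = build_logical_switch_request("web-tier", "tz-0001", 5001)
```

The same pattern applies to logical routers, firewalls and load balancers: each is an object POSTed to the controller rather than a device to be cabled and configured by hand.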
However, this abstraction is a double-edged sword. While it is no longer necessary to make manual changes to stand up a new service, the physical network loses application-layer visibility. While Differentiated Services Code Point (DSCP) markings may be copied from the inner to the outer packet, traffic will inevitably leave the virtualized network and break out to a physical one. Traffic management devices that make decisions at Layer 7, such as intrusion detection sensors, packet shapers and WAN optimizers, will be blinded to the tunneled traffic. Network management tools simply do not have the flexibility to understand what is happening in both network layers and may be rendered useless. While all of these problems are solvable in the long term, they may force traffic re-engineering that network managers had not bargained for.
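The visibility problem above can be sketched in a few lines. The packets are modeled as simple dictionaries, a deliberate simplification of the real header formats: once the original packet becomes the tunnel payload, only fields explicitly copied to the outer header (here, the DSCP bits) remain visible to the underlay.

```python
# Simplified model of VXLAN encapsulation -- not a full implementation.
# It shows why underlay devices lose Layer 7 visibility: the inner packet
# becomes an opaque payload behind a generic UDP/VXLAN outer header.

def vxlan_encapsulate(inner_packet, vni, copy_dscp=True):
    """Wrap an inner packet (a dict) in an outer UDP/VXLAN header (a dict)."""
    return {
        "proto": "udp",
        "dst_port": 4789,  # IANA-assigned VXLAN port
        "vni": vni,
        # DSCP may be copied out, so basic QoS can survive the tunnel...
        "dscp": inner_packet["dscp"] if copy_dscp else 0,
        # ...but everything else, including the L7 content, is hidden here.
        "payload": inner_packet,
    }

pkt = {"src": "10.0.0.5", "dst": "10.0.0.9", "dscp": 46, "l7": "HTTP GET /"}
tunnelled = vxlan_encapsulate(pkt, vni=5001)
```

A packet shaper on the physical network sees only a UDP flow to port 4789 with a DSCP marking; the HTTP request it would normally classify is buried inside the payload.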
With Cisco ACI, building anew
Cisco ACI is a framework of hardware and software rather than a software suite designed to knit together existing networks. Built on top of the Nexus 9000 switches, it is controlled by Application Policy Infrastructure Controllers (APICs). The APICs sit outside of the data path and stitch the Nexus physical underlay to the Nexus 1000v virtual switch. Perhaps the most powerful aspect of ACI is the introduction of this hierarchical object model:
- Context (virtual routing and forwarding)
- End point groups (EPGs)
- End points
- Application network profiles
Access control, service insertion and QoS policies are defined between the EPGs and connected using contracts. Different application tiers, such as Web, database and management, are collected into endpoint groups and managed together as distinct application network profiles. This abstraction reflects the way applications are managed today and doesn't force re-architecture to fit an arbitrary nomenclature.
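The relationship between these objects can be sketched as plain data structures. The class and field names below mirror the concepts described above (endpoints, EPGs, contracts, application network profiles) but are assumptions for illustration, not the actual APIC object schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of ACI's hierarchical object model; names are
# assumptions, not the APIC's real API.

@dataclass
class Endpoint:
    mac: str
    ip: str

@dataclass
class EndpointGroup:
    """An EPG collects endpoints that share the same policy, e.g. a Web tier."""
    name: str
    endpoints: List[Endpoint] = field(default_factory=list)

@dataclass
class Contract:
    """Policy (access control, QoS, service insertion) between two EPGs."""
    provider: EndpointGroup
    consumer: EndpointGroup
    allowed_ports: List[int]

@dataclass
class AppNetworkProfile:
    """Groups an application's EPGs and contracts so they are managed as one."""
    name: str
    epgs: List[EndpointGroup]
    contracts: List[Contract]

web = EndpointGroup("web", [Endpoint("00:50:56:aa:00:01", "10.1.1.10")])
db = EndpointGroup("db", [Endpoint("00:50:56:aa:00:02", "10.1.2.10")])

profile = AppNetworkProfile(
    name="online-store",
    epgs=[web, db],
    contracts=[Contract(provider=db, consumer=web, allowed_ports=[3306])],
)
```

The point of the model is that policy follows the application: the Web tier may talk to the database only on the ports the contract allows, regardless of where the endpoints physically sit.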
Cisco SDN vs. NSX: Comparing the challenges
Cisco's approach of a hardware-led solution solves many of the visibility problems that an NSX deployment could encounter. The ability to gain a "single pane of glass" view on the underlay and overlay networks is not to be undersold and will score points in some enterprises. However, to achieve this, Cisco ACI relies on a single, specific generation of high-end hardware -- the Nexus 9000 switch.
Requiring a high-end switching investment may prove to be a problem. Not everyone needs 1,000+ ports of nonblocking 10 Gbps interfaces, and it will be a while before they do. Investment in the existing estate can be protected by up-cycling Nexus 2000 Series Fabric Extenders into the ACI solution, provided, of course, one deploys Nexus 9000s. Cisco has said that the Nexus 5000 and 7000 will still be needed at the network edge, but this argument may not hold. Fundamentally, SDN is an end-to-end proposition; long term, one should avoid islands of control, as they diminish the viability of the solution.
What might make the Cisco high-end investment more palatable is that ACI provides an answer to many infrastructure growth questions with a powerful switch wedded to a native controller. What's more, by making ACI monolithic, Cisco can wrap up ACI in a nice neat Technical Assistance Center package. Indeed, Cisco's marketing sets the expectation that an ACI deployment is a major professional services engagement, not something a customer could attempt as a solo endeavor.
Meanwhile, NSX can be folded into existing vSphere deployments and may win over budget-pressed administrators. This will make it easier to demonstrate the value of SDN with the kit they have, rather than the kit they may ultimately need.
Yet while NSX is undoubtedly quicker to deploy in the short term, it requires customers to find ways of connecting overlay and underlay management, something that is not straightforward with existing tools. Taking NSX beyond the proof-of-concept stage will ask questions of the existing physical infrastructure for which many may not have answers. What's more, VMware's NSX pricing remains unclear and could prove costly, depending on what the per-VM price for an NSX implementation turns out to be.
Ultimately, however, networks are built incrementally. Most administrators and architects will choose and, indeed, need to build upon what is already in place, rather than conduct a full rip and replace. That will be the case at least until SDN and network virtualization are fully proven. As a result, VMware may make swifter initial headway, while Cisco makes a larger long-term dent.
About the author: Glen Kemp is an enterprise solutions architect for a U.K.-based managed services provider. He designs and deploys network and application security tools, including access control, remote access, firewalls and other "keep the bad guys out" technologies. He is also an experienced professional services consultant; delivering elephants and not hunting unicorns. His blogs can be found at sslboy.net and at the Packet Pushers Podcast. Follow him on Twitter @ssl_boy.
This was first published in January 2014