Although VMware is best known as a server virtualization company, its impact on the networking industry has been profound. VMware-based server consolidation has driven up bandwidth requirements; the embedded distributed virtual switch in VMware's hypervisor software has virtualized the server access layer of most data centers; and the dynamic nature of virtual machines is driving much of the competitive innovation in the data-center networking market today.
During his recent keynote at Interop Las Vegas, Steve Herrod, VMware CTO and senior vice president of R&D, introduced the concept of the software-defined data center, a term naturally evocative of one of the hottest areas of networking today: software-defined networking (SDN). SearchNetworking.com spoke with Allwyn Sequeira, VMware's CTO and vice president of cloud, networking and security, to learn more about VMware's views on the software-defined data center, network virtualization and SDN.
VMware has started promoting the concept of the software-defined data center. This sounds evocative of the software-defined-networking movement. Could you elaborate on this?
Allwyn Sequeira: The thinking was, we had vSphere and life was good. Now we need to virtualize the rest of the data center. As we started to evolve our thinking toward networking, the software-defined-networking momentum also happened to be in play at the same time. Most of our concepts of network virtualization, and some of the notions of SDN, are very applicable across network, compute, storage and security. It's a natural evolution to take the SDN construct and parlay that into a larger construct, which is the software-defined data center. Over time, we expect to see less and less specialty hardware [in data centers] and more x86 hardware with specialty software.
The other theme is that the virtual machine [VM] as a container has stood up well and become the abstraction in the workload space, whether it be Amazon or VMware. The VM has become a unit of server virtualization and desktop virtualization. Likewise, what's left of the data center, the virtual data center, becomes a new container.
What do you mean when you speak of data centers being composed of x86 hardware with specialty software? Is this the VMware networking vision?
Sequeira: The broader statement is that x86 and merchant silicon are clearly the trend. My point there is that the Ciscos of the world have spent a lot of time and energy on ASICs, but a lot of software and control is increasingly happening on the x86 portion. Most [data center] gear is headed that way. If you look at an F5 [Networks] device, it's mostly x86. Look at a Cisco ASA, where they have firewalls and load balancers. That's all x86-based. We have versions of Cisco routers working with embedded hypervisors running on x86 to enable a whole bunch of VMs to run on them. Not many people recognize it, but an increasing part of hardware, even in the Cisco world, is x86 and merchant silicon. So the better way to say it is that x86 and commercial off-the-shelf merchant silicon are going to be prevalent in data center servers and in the network itself.
I think Cisco might disagree with you. It continues to argue that ASICs give it an edge in the data center.
Sequeira: If I were Cisco, that's what I would say. And if I were Cisco, I would still look at merchant silicon. Basing the franchise purely on merchant silicon really says the networking play moves to software. What I think they're saying is that all that merchant silicon and x86 is fine, but when you get to higher and higher consolidation, there is still a need to scale out with rows of x86-based, top-of-rack switches. You still need to come out to the core. And at the heart of it, you still need very fast switching silicon there, the likes of which is only produced by the Ciscos and Junipers of the world. And that is a valid statement.
Any Cisco box has three components: input and output ports, software processing and then the fastpath. The fastpath -- tagging and protocols -- is where Cisco has always distinguished itself from its peers. Inside the very core of these boxes, having that fast, specialized silicon might make sense. But all the programmability, scale-out and manipulability will come out of the slowpath and control-plane portions of these boxes.
Continue reading part two of this Q&A.