OpenFlow networking has enjoyed a tremendous amount of hype in recent months, even playing a starring role at Interop 2011 in Las Vegas, but it's the concept behind the protocol—software-defined networks—that holds promise for the real revolution in virtualization and cloud networking.
In a software-defined network, switches and routers take some form of direction from a centralized software management element. In the context of OpenFlow, the control plane is abstracted from the data forwarding plane. A centralized controller, which maintains a real-time, holistic view of the network, defines network paths as "flows" and distributes this flow data to individual switches and routers. With these flows, the controller coordinates the forwarding of data across all network devices, enabling the automation and granularly managed dynamic provisioning necessary in virtualized environments and cloud networks.
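The control-plane/data-plane split described above can be sketched in a few lines of Python. This is a purely illustrative model, not actual OpenFlow protocol messages: the class names, the match-tuple format and the `push_path` helper are all inventions for the sketch.

```python
# Minimal sketch of the OpenFlow split: a central controller holds the
# network view and pushes flow entries (match -> action) down to
# switches, which only look them up. Names are illustrative, not spec.

class Switch:
    """Data plane only: forwards by consulting its flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match fields -> output port

    def install_flow(self, match, out_port):
        self.flow_table[match] = out_port

    def forward(self, match):
        return self.flow_table.get(match)   # None -> no rule (a miss)

class Controller:
    """Control plane: decides paths for every switch it manages."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def push_path(self, match, hops):
        # hops: list of (switch_name, out_port) along the chosen path
        for name, port in hops:
            self.switches[name].install_flow(match, port)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)

# The controller defines one "flow" end to end and distributes it.
ctrl.push_path(match=("10.0.0.1", "10.0.0.2"), hops=[("s1", 3), ("s2", 1)])
print(s1.forward(("10.0.0.1", "10.0.0.2")))   # -> 3
```

The point of the sketch is the division of labor: the switches hold no topology knowledge at all, only the entries the controller has pushed to them.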
"OpenFlow is one instantiation of software-defined networks," said Mark Fabbi, vice president and distinguished analyst with Gartner Inc. "This is an idea that had been around for a decade or more. If you look at Juniper's QFabric, that would be an instantiation of a software-defined network, because you've got switches and a fabric-based capability in the core. And then you have all of these devices talking to a controller, and the controller determines paths and what services to apply."
This approach contrasts with today's networks, which have a distributed and uncoordinated control plane. The switches and routers each maintain routing or MAC address tables with data about the network elements around them, and they make forwarding decisions based on that data. This approach has worked well, to a point. But IT infrastructure is much more dynamic than it used to be, thanks to virtualization. And the network needs to adapt.
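For contrast, the distributed model works roughly like the learning switch sketched below: each device builds its own MAC table from the traffic it sees and decides forwarding locally, with no shared view of the network. Again, this is a simplified illustration, not vendor code.

```python
# Today's distributed control plane in miniature: each switch learns
# MAC addresses on its own and makes forwarding decisions locally.

class LearningSwitch:
    def __init__(self):
        self.mac_table = {}            # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port          # learn the source
        # Forward to a known port, or flood when the destination is unknown.
        return self.mac_table.get(dst_mac, "FLOOD")

sw = LearningSwitch()
print(sw.receive("aa:aa", "bb:bb", 1))   # -> FLOOD (bb:bb not yet learned)
print(sw.receive("bb:bb", "aa:aa", 2))   # -> 1 (aa:aa was learned above)
```

Every switch repeats this learning independently, which is exactly the uncoordinated behavior a centralized controller replaces.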
OpenFlow and the software-defined network: Integrating the OSI stack
OpenFlow and software-defined networks serve one chief goal: making the network more responsive and adaptable to the rest of the IT infrastructure. Today's networks are static. They focus on Layer 2 and Layer 3 of the OSI model, and that narrow focus leaves them poorly suited to server virtualization.
"Before, networking people always talked about how it's really all about the packet. Well, is it really about the packet? Networking just thinks Layer 2 and Layer 3, but in reality all of the OSI stack has to be better integrated to understand what's going on," said Andre Kindness, senior research analyst with Forrester Research.
Virtualization has made IT infrastructure more dynamic, so networks need to be responsive to changes. The network must be able to automatically adjust VLANs, QoS policy and ACLs when a server administrator moves a virtual machine from one server to another.
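The kind of automation described above can be sketched as an event handler that moves a VM's network profile along with the VM. The profile fields, VM names and switch identifiers here are all hypothetical.

```python
# Hypothetical sketch: when a VM migrates, its network profile
# (VLAN, QoS class, ACLs) follows it to the port on the new switch.

vm_profiles = {
    "web-vm-01": {"vlan": 110, "qos": "gold", "acl": ["permit tcp any eq 80"]},
}

switch_ports = {}   # (switch, port) -> applied profile

def on_vm_migrated(vm, old_loc, new_loc):
    """React to a VM move by reconfiguring the network automatically."""
    profile = vm_profiles[vm]
    switch_ports.pop(old_loc, None)        # clean up the old port
    switch_ports[new_loc] = profile        # configure the new port
    return new_loc, profile["vlan"]

loc, vlan = on_vm_migrated("web-vm-01", ("s1", 5), ("s7", 12))
print(loc, vlan)   # -> ('s7', 12) 110
```

In the manual world Kindness describes, the `on_vm_migrated` step is a change ticket and a network engineer; in a software-defined network it is an API call.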
"Right now on average it takes about two days to move a virtual machine because it's not automated," Kindness said. "It's automated in the server, but once you go out to the network, a change ticket has to go out and then a network engineer has to change one part of the network before the VM can go there."
Basically, the networking world is not tied into the world of applications and is confined to managing the packet, said Kindness. A software-defined network is about "leveraging both the server and networking world more efficiently and tying all those worlds together. It's about the ability to look at workload utilization across different types of hardware and make a decision about how a packet should flow. It's about overall efficiency and utilizing assets in the best way possible," he said.
Software-defined networks: Who can take you there?
OpenFlow is not the only route to software-defined networking. Arista Networks is partnering with VMware to create its own flavor of software-defined networks centered on making the network more responsive to changes in virtualized server infrastructure. By opening its switching firmware to VMware, Arista's switches can automatically adjust to the instantiation of new virtual machines or the migration of virtual machines across the infrastructure.
"We've been doing some internal developments around the integration of [Open vSwitch] with our switching OS," said Doug Gourlay, Arista's vice president of marketing. "What we are doing is treating things like vSphere as a networking controller. When you look at vSphere, it can control VLANs, QoS policy and ACLs, all things that are bound to a virtual machine. We suborn ourselves to that and say that for some portion of what our network equipment can do, we are going to take direction from vSphere. When you create a virtual machine in vCenter and when you move that virtual machine, everything the network needs to do we will do automatically via a defined and prescribed API between our switches and vSphere."
As mentioned previously, Juniper's QFabric architecture is a software-defined network to some extent, but the architecture isn't available yet. Juniper has released the data forwarding element of QFabric, the QFX3500, but the QF/Interconnect core device and the management appliance QF/Director, which is analogous to the controller in a software-defined network, won't be available until later this year.
The market for OpenFlow-based products is still in its infancy, but OpenFlow's open approach stands in sharp contrast to Juniper's architecture. Building a software-defined network with QFabric requires an end-to-end deployment of QFabric products. Building one with OpenFlow requires only an OpenFlow controller and switches that support the protocol, which can come from any number of vendors.
No major networking vendor has released OpenFlow support on its switches yet, although nearly a dozen demonstrated the technology at Interop this year. NEC Corp. is the first vendor to release a product. Little known as a networking vendor outside its home Japanese market, NEC has been collaborating with the university researchers behind OpenFlow for five years. That work culminated in the announcement of NEC's ProgrammableFlow product line at Interop, where the products won the Best of Interop award.
NEC's ProgrammableFlow consists of two main products so far. The first is an OpenFlow-enabled switch, the PF5240, with 48 ports of Gigabit Ethernet (GbE) and four uplink 10 GbE ports. The second is the ProgrammableFlow Controller (PFC)—OpenFlow controller software that can make all the forwarding decisions for the PF5240 switches and for any third-party switches that support OpenFlow in the future.
Several startups are working on OpenFlow controllers and building partnerships with switching vendors behind the scenes. These include Big Switch Networks and Nicira Networks.
OpenFlow architecture as an alternative to spanning tree?
A software-defined network based on OpenFlow does more than adapt to changes caused by virtualization, according to Kyle Forster, co-founder and vice president of sales and marketing at Big Switch Networks. By moving the control plane from switches and routers to a centralized controller, OpenFlow also enables advanced multipath forwarding techniques. That means that instead of relying on TRILL or Shortest Path Bridging to get around the limits of spanning tree protocol, enterprises can define multipath flows in an OpenFlow controller. Since the controller has a complete picture of the network, it can prevent loops from occurring.
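The multipath argument rests on the controller's complete topology view: it can enumerate loop-free paths directly instead of letting spanning tree disable redundant links. A minimal sketch, using an invented four-switch fabric:

```python
# Controller-computed multipath: with the whole topology in hand, the
# controller enumerates simple (loop-free) paths and can spread flows
# across them. The topology below is illustrative.

topology = {          # adjacency list of a small fabric
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def simple_paths(graph, src, dst, path=None):
    """All loop-free paths from src to dst (each node visited at most once)."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    found = []
    for nxt in graph[src]:
        if nxt not in path:               # never revisit a node -> no loops
            found += simple_paths(graph, nxt, dst, path)
    return found

paths = simple_paths(topology, "A", "D")
print(paths)   # -> [['A', 'B', 'D'], ['A', 'C', 'D']]
```

A controller can hash flows across both of these paths; spanning tree, by contrast, would have blocked one of the redundant links entirely.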
An OpenFlow controller also enables programmability on the network, Forster said. By opening APIs on a controller, third parties can develop software that uses the controller to run advanced network functions and services across the network. For instance, some researchers have explored building load balancers on top of an OpenFlow controller. Security vendors could develop distributed virtual firewall or intrusion prevention software that imposes security policies on individual switches and routers through the flows defined on an OpenFlow controller, Forster said.
Big Switch Networks is developing software that allows network engineers to build a multi-tenant model on top of their network infrastructure, something especially appealing in a cloud computing environment. An engineer can use Big Switch's controller to create a virtual switch composed of ports on multiple physical switches, and present that to system administrators as a fixed network that serves their servers and applications.
"The demo we did at Interop showed this architectural view where you take two ports from here and five ports from there and seven ports from this switch over here and roll all those up into one big virtual switch," Forster said. "An administrator can log in and all they see is a 14-port switch. What they don't see is that those ports are sitting on many different machines across the data center. On one side, the administrator feels like they own a full physical switch, but underneath we are laying out that switch on the physical hardware and doing isolation and segmentation so that if the admin does something that crashes that virtual switch, the hardware underneath will survive just fine."
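The virtual switch Forster describes is essentially a mapping from tenant-visible port numbers onto (physical switch, physical port) pairs. A rough sketch, with invented names and port assignments matching the 2 + 5 + 7 example from the demo:

```python
# Multi-tenant virtual switch sketch: the tenant sees one fixed switch,
# while each virtual port maps onto a real (switch, port) pair.

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.port_map = {}             # virtual port -> (phys switch, port)

    def add_ports(self, phys_switch, phys_ports):
        for p in phys_ports:
            vport = len(self.port_map) + 1
            self.port_map[vport] = (phys_switch, p)

    def resolve(self, vport):
        return self.port_map[vport]    # where the traffic really goes

vs = VirtualSwitch("tenant-a")
vs.add_ports("s1", [10, 11])            # two ports from here
vs.add_ports("s2", [3, 4, 5, 6, 7])     # five ports from there
vs.add_ports("s3", range(1, 8))         # seven ports from this switch

print(len(vs.port_map))    # -> 14: the tenant sees a "14-port switch"
print(vs.resolve(14))      # -> ('s3', 7)
```

The isolation Forster mentions follows from the mapping: the tenant can only address virtual ports, so a misconfiguration is confined to this `port_map` and never touches the hardware's other ports.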
OpenFlow hype vs. reality
Despite the promise of OpenFlow, until these vendors begin to release and sell products, the protocol is essentially a "science project," said Gartner's Fabbi. The market is a couple of years away from having multiple products, let alone a full ecosystem. Today OpenFlow vendors are focusing on cloud computing providers, who have the most pressing need for the capabilities of software-defined networks. But scalability is a concern.
Like the controllers so prominent in wireless LAN architecture, OpenFlow controllers have the potential to create a bottleneck, with switches and routers delegating forwarding decisions to a controller running on a server.
"I honestly don't think it's going to work," Arista's Gourlay said. "The flow setup rates on Stanford's largest production OpenFlow network are 500 flows per second. We've had to deal with networks with a million flows being set up in seconds. I'm not sure it's going to scale up to that."
However, OpenFlow vendors are well aware of scalability concerns. Forster said Big Switch is developing clustering technology for its controllers. Rather than building one large controller to handle all the flows on a large network, it is developing controllers that work cooperatively to manage the network.
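One simple way to realize the cooperative-controller idea is to partition switches across controller instances, for example by hashing switch identifiers. This is a generic sketch of the concept, not Big Switch's actual design.

```python
# Controller clustering sketch: switches are partitioned across
# cooperating controller instances by a deterministic hash, so no
# single controller handles every flow setup. Purely illustrative.

import hashlib

def owning_controller(switch_id, controllers):
    """Deterministically map a switch to one controller in the cluster."""
    digest = hashlib.md5(switch_id.encode()).hexdigest()
    return controllers[int(digest, 16) % len(controllers)]

cluster = ["ctrl-1", "ctrl-2", "ctrl-3"]
for sw in ["s1", "s2", "s3", "s4"]:
    print(sw, "->", owning_controller(sw, cluster))
```

Each controller then handles only its share of flow setups, and the mapping stays stable as long as cluster membership does. A production design would also need failover, which this sketch omits.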
OpenFlow controllers will also continue to let switches and routers make plenty of their own forwarding decisions; in fact, vendors are counting on it. Forster said the switches within an OpenFlow architecture aren't going to be dumber devices. Instead, they need to be beefier and capable of running large numbers of OpenFlow rules and supporting other OpenFlow applications. He pointed to how access points in a controller-based wireless LAN architecture have been getting beefier in order to support applications like rogue detection and spectrum management.
Also, many OpenFlow switches will continue to make plenty of forwarding decisions on their own, only seeking instructions from a controller when they receive unexpected flows of packets.
"Switches can forward packets based on the flow-based rules sent down from an OpenFlow controller," said Don Clark, director of business development for NEC Corp. "In a case where we have lots of virtual machine mobility with lots of changes to the network, we would process this reactively. When a packet arrives that does not match an existing rule [on a local switch], that first packet is forwarded up to the controller. It's processed in logic and then the new rule is set down in the flow table."
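The reactive model Clark describes can be sketched as a table miss that punts the first packet to the controller, which installs a rule so every subsequent packet is handled locally. The class and method names below are illustrative, not actual OpenFlow messages.

```python
# Reactive flow setup sketch: a flow-table miss sends the first packet
# up to the controller; the returned rule is cached so later packets
# in the same flow never leave the switch.

class SimpleController:
    def decide(self, match):
        # Stand-in for real path computation on the controller.
        return f"out_port_for_{match}"

class ReactiveSwitch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}
        self.punts = 0                 # packets sent up to the controller

    def packet_in(self, match):
        if match in self.flow_table:
            return self.flow_table[match]      # fast path, local decision
        self.punts += 1
        rule = self.controller.decide(match)   # first packet goes up
        self.flow_table[match] = rule          # rule set down in flow table
        return rule

sw = ReactiveSwitch(SimpleController())
sw.packet_in("flowA")    # miss: punted to the controller, rule installed
sw.packet_in("flowA")    # hit: handled entirely in the switch
print(sw.punts)          # -> 1
```

This is also where the scalability debate above plays out: only table misses reach the controller, so the punt rate, not the total packet rate, is what the controller must sustain.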
Let us know what you think about the story; email Shamus McGillicuddy, News Editor.