Traditional routing and switching platforms haven't met cloud providers' requirements for building and managing cloud services -- a reality that equipment vendors were once unwilling to accept. What cloud providers want is an application programming interface (API) for the network layer so they can control the flow of their specific applications across their infrastructures. We now see this capability emerging with the software-defined network and OpenFlow specification.
For years, Google has pushed the equipment vendors for software innovation, but the lack of standards has slowed progress. Moreover, equipment manufacturers treated the cloud providers as a niche market, often dismissing their requirements as specific to only a handful of companies. Vendors likewise dismissed the unique demands of high-performance computing (HPC) infrastructures -- built on a programmable network layer designed to maximize performance -- as relevant only to specialized quantitative analysis applications.
Equipment vendors commit more resources to opportunities that promise short-term financial results and broad customer appeal than to those that require long research and development timelines for their so-called niche cloud provider customers.
But cloud providers and large data center operators aren't waiting around for vendors anymore. Deutsche Telekom, Facebook, Google, Microsoft, Verizon and Yahoo established the Open Networking Foundation (ONF) -- naming Urs Hoelzle, senior vice president of engineering at Google, as its first chairman -- to foster the development of the OpenFlow specification. The mission of ONF and the rest of its members, most of which are equipment manufacturers, will be to develop the OpenFlow specification and create an open and nonproprietary network API that can be freely licensed by the member organizations. Thanks to the ONF, networking is being driven by the users of the network equipment and not its manufacturers.
OpenFlow, software-defined network eases management for cloud providers
Through OpenFlow, the software-defined network has arrived. We have seen a similar evolution in the wireless LAN marketplace: Wireless access points today are managed through a central controller for ease of management, access control and mobility. This was not the case when wireless LAN was first introduced in the late 1990s. The wireless network has evolved, and now the wired network is evolving as well.
OpenFlow allows the control plane to be separated from the data path. A centralized controller manages a number of switches and has a global view of the network, its state and its current users. From this vantage point, the controller can easily manage the network in novel ways customized to the needs of a particular deployment. OpenFlow and the software-defined network offer many potential use cases for cloud providers, such as managing large data sets within a single data center or across multiple data centers connected by dark or dedicated fiber.
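The control/data-path split described above can be sketched in miniature: the switch only evaluates prioritized match-to-action rules, and everything it knows about policy was installed by the controller. The Python sketch below uses invented names and is not any real OpenFlow library or controller API:

```python
# Hypothetical sketch, not a real controller API: models how an
# OpenFlow-style switch matches packets against prioritized flow rules
# that a central controller has installed. All names are invented.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Match:
    """A simplified OpenFlow match: None fields act as wildcards."""
    dst_ip: Optional[str] = None
    tcp_port: Optional[int] = None

    def covers(self, pkt: dict) -> bool:
        return ((self.dst_ip is None or pkt.get("dst_ip") == self.dst_ip)
                and (self.tcp_port is None or pkt.get("tcp_port") == self.tcp_port))

@dataclass
class Switch:
    name: str
    table: list = field(default_factory=list)  # entries: (priority, Match, action)

    def install(self, priority: int, match: Match, action: str) -> None:
        """Called by the controller: the switch holds no policy of its own."""
        self.table.append((priority, match, action))
        self.table.sort(key=lambda rule: -rule[0])  # highest priority wins

    def forward(self, pkt: dict) -> str:
        for _priority, match, action in self.table:
            if match.covers(pkt):
                return action
        return "send_to_controller"  # table miss: punt to the controller

# The controller decides policy and pushes it down to the switch:
edge = Switch("edge-1")
edge.install(100, Match(tcp_port=80), "out_port_2")  # steer web traffic
edge.install(10, Match(), "out_port_1")              # default route

print(edge.forward({"dst_ip": "10.0.0.5", "tcp_port": 80}))  # out_port_2
print(edge.forward({"dst_ip": "10.0.0.5", "tcp_port": 22}))  # out_port_1
```

The point of the sketch is the division of labor: the switch's forwarding loop is dumb and fast, while the controller, with its global view, decides which rules exist at all.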
OpenFlow and the software-defined network help cloud providers be more nimble with network management. Up to this point, operators have had to use the same networking protocols that the vendors developed and supported through standards bodies, such as the Internet Engineering Task Force (IETF). The standardization process can take years to reach agreement, followed by an additional two to five years for the hardware to be updated to support the new protocols. With OpenFlow, a network software developer can write code to manage specific types of flows across network devices based on, for example, the cost of the equipment, the load on the network and the value of various types of traffic.
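As a toy illustration of that kind of flow-management code, the sketch below ranks candidate paths by load, equipment cost and the value of the traffic class. The names, weights and traffic classes are invented for the example, not drawn from any real controller:

```python
# Hypothetical policy sketch: score candidate paths by current load and
# per-gigabyte cost, discounting cost for higher-value traffic classes.
# Weights and class values are invented for illustration.
def pick_path(paths, traffic_class):
    """Return the best path for a flow of the given traffic class."""
    # Higher-value traffic tolerates pricier links to get headroom.
    value = {"voice": 3.0, "video": 2.0, "bulk": 1.0}[traffic_class]

    def score(path):
        # Penalize congested links; divide cost by value so valuable
        # flows care less about price and more about load.
        return path["load"] * 5 + path["cost_per_gb"] / value

    return min(paths, key=score)

paths = [
    {"name": "fiber-A", "load": 0.6, "cost_per_gb": 1.00},  # cheap, busy
    {"name": "fiber-B", "load": 0.2, "cost_per_gb": 5.00},  # pricey, idle
]
print(pick_path(paths, "voice")["name"])  # fiber-B: low load wins
print(pick_path(paths, "bulk")["name"])   # fiber-A: low cost wins
```

This is exactly the sort of operator-specific logic that cannot be expressed through a vendor's fixed protocol stack but becomes a few lines of code once the network exposes a programmable API.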
Currently, many experiments are under way in the industry, tucked behind the paper curtain of nondisclosure agreements. How many cloud providers will publicly announce production implementations of OpenFlow and software-defined networks? Which vendors will be the first to support OpenFlow across all of their cloud provider platforms?
About the author: Marshall Bartoszek is the principal analyst for ACG Research's data center practice, where he runs a comprehensive data center program consisting of training modules to support service providers' and vendors' go-to-market processes and strategies based on industry best practices.
This was first published in September 2011