
Centralized vs. decentralized SDN architecture: Which works for you?

Two SDN architecture models -- centralized and decentralized -- have much in common, but key differences center on the use of the OpenFlow protocol.

Editor's note: In the first of our two-part series on understanding software-defined networking (SDN) technology strategies, we examine the differences between distributed and centralized SDN architecture models. In part two, we dig deeper into the theoretical differences between the two architectures.

Enterprises and network operators that want to experiment with or deploy a software-defined network (SDN) quickly realize there are two different models of SDN -- centralized and decentralized (also called distributed) -- and each model has different infrastructure elements and requirements to consider.

A successful deployment will require choosing the right SDN architecture, then testing it in an organized way, based on the right infrastructure.

At a high level, software-defined networks have three main elements:

  • A physical network of devices -- the equipment, located throughout the network, that actually forwards packets;
  • Software controllers -- the software that accesses and controls network services; and
  • Connecting technology that links everything downward from the user to the devices that create network services.

Centralized SDN architecture focuses on the OpenFlow protocol

Looking first at a centralized SDN architecture -- the model that standards groups like the Open Networking Foundation (ONF) support -- the key element is the connecting technology that communicates central control decisions to devices. OpenFlow has become the official protocol to use in a centralized SDN model to make high-level routing decisions. As a result, the creation of central-control SDN must be based on selecting devices and control software that support the OpenFlow standard.
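The central-control idea OpenFlow embodies can be sketched in a few lines: a controller pushes match/action rules down to each switch, and the switch forwards packets by consulting that flow table. The classes, field names and priority scheme below are simplified assumptions for illustration, not a real OpenFlow library.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g. {"dst_ip": "10.0.0.2"}; only listed fields must match
    action: str          # e.g. "output:2" (forward out port 2) or "drop"
    priority: int = 0    # higher-priority rule wins when several match

@dataclass
class Switch:
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """Controller-side call: push a rule down to the switch."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict) -> str:
        """Switch-side lookup: first (highest-priority) matching rule wins."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"   # table miss: punt the decision upward

sw = Switch()
sw.install(FlowRule(match={"dst_ip": "10.0.0.2"}, action="output:2", priority=10))
sw.install(FlowRule(match={}, action="drop", priority=1))  # catch-all rule

print(sw.forward({"dst_ip": "10.0.0.2"}))  # output:2
print(sw.forward({"dst_ip": "10.0.0.9"}))  # drop
```

The table-miss case is what makes the model "centralized": routing intelligence lives in the controller, and the switch only caches its decisions.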

With OpenFlow, there is good and bad news.

The good news is that virtually all of the major switch/router vendors have announced OpenFlow support, even though there are several versions of the OpenFlow standard and vendors may not have released software for the latest version. A number of OpenFlow-compatible controllers can provide central control for a community of devices, and many OpenFlow-based SDN trials and deployments have been conducted using these tools.


The bad news is that OpenFlow is not yet a complete standard. It doesn't have a mechanism for fully managing devices, particularly for controlling port/trunk interfaces and queuing. OpenFlow doesn't communicate network status fully either, so central control software will need some source of information about loads, traffic and operational state for nodes and trunks. OpenFlow also lacks a specific process for establishing the paths to take between controller and switch, so there's a question of how OpenFlow networks "boot from bare metal," meaning how they become operational from a newly installed state. Finally, OpenFlow controller implementations offer only basic OpenFlow command generation and lack a network-wide view of conditions needed to create routes.

All existing controllers have northbound APIs to management software to gain these features, but it remains unclear what software is available to exercise these interfaces and how it would be integrated with the controller. All of this makes it hard to conceptualize how a current network could evolve to support a centralized SDN model. The capabilities of central control software -- above that of the SDN controller itself -- are the key to success for the centralized SDN model.

An area of concern for the centralized model in the network device element is the way OpenFlow forwarding is actually supported. Most modern switches and routers use custom silicon to manage the data-forwarding process, but not all implementations of OpenFlow take full advantage of this "fast-path" technology. As a result, OpenFlow forwarding performance is slower than native protocol support would be.

Even if OpenFlow is supported fully by silicon, will it forever be a parallel option to current IP and Ethernet handling? If it is, network device costs wouldn't fall, and the expected financial benefits from the use of the centralized SDN model would be harder to realize. Native OpenFlow switches may be the answer, but these are very limited in number.

Distributed SDN architecture proponents believe in evolution

For the distributed SDN model, evolution is the goal. Here, SDN development focuses on the control software layer, an area the centralized model addresses only in a limited way.

Distributed SDN presumes that switches and routers are already deployed throughout the network. It is also a given that these devices already support most or all of the connecting technology needed, in the form of protocols like MPLS, GRE and BGP. The goal is to expose the traffic and connectivity management capabilities of the current networks to a higher software layer, which would then frame these capabilities as "virtual network services" to the cloud or to applications. What matters is that northbound APIs or interfaces allow software, including cloud stack software, to control network services.
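A hedged sketch of that northbound idea: a thin service layer translates an application's request for a "virtual network service" into operations on protocols the deployed devices already support, such as MPLS. The class and method names here are invented for illustration, not a real product API.

```python
class DeviceLayer:
    """Stand-in for the already-deployed gear and its existing protocols."""
    def __init__(self):
        self.tunnels = []

    def create_mpls_path(self, src: str, dst: str, bandwidth_mbps: int) -> str:
        # In a real network this would signal an MPLS LSP; here we just record it.
        self.tunnels.append((src, dst, bandwidth_mbps))
        return f"lsp-{len(self.tunnels)}"

class NorthboundAPI:
    """What cloud-stack or application software would call."""
    def __init__(self, devices: DeviceLayer):
        self.devices = devices
        self.services = {}

    def request_virtual_service(self, name: str, endpoints: list, bandwidth_mbps: int) -> list:
        # Map one abstract service onto concrete paths between consecutive endpoints.
        paths = [self.devices.create_mpls_path(a, b, bandwidth_mbps)
                 for a, b in zip(endpoints, endpoints[1:])]
        self.services[name] = paths
        return paths

api = NorthboundAPI(DeviceLayer())
print(api.request_virtual_service("tenant-42", ["siteA", "siteB", "siteC"], 100))
# ['lsp-1', 'lsp-2']
```

The application never sees MPLS, GRE or BGP; it sees only the abstract service, which is the point of the distributed model.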

In the area of connecting technology, the distributed SDN model has a very different requirement set. Because it doesn't centralize routing decisions as its competing model does, the distributed model doesn't need OpenFlow, though it may be supported at some point. What it needs, however, is a practical way of gathering a considerable amount of status and performance information from the network, which means gathering it across all of the protocol layers, device types and vendors involved. Without this data, it is impossible to ensure that the virtual network services created in a distributed SDN model conform to software needs because network conditions can't be accurately determined. As a result, monitoring technology is the key to the success of the distributed SDN model.
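The monitoring problem described above can be made concrete: status arrives from different vendors and protocol layers in different shapes and must be normalized into one network-wide view before service decisions can be made. The vendor names, field names and utilization threshold below are invented for illustration.

```python
def normalize(vendor: str, raw: dict) -> dict:
    """Map a vendor-specific status record to a common schema."""
    if vendor == "vendor_a":             # reports load as a percentage
        return {"node": raw["hostname"], "util": raw["load_pct"] / 100,
                "up": raw["oper_status"] == "up"}
    if vendor == "vendor_b":             # reports bits used vs. capacity
        return {"node": raw["name"], "util": raw["bps_used"] / raw["bps_capacity"],
                "up": raw["state"] == 1}
    raise ValueError(f"no adapter for {vendor}")

def congested(records: list, threshold: float = 0.8) -> list:
    """Network-wide view: which live nodes exceed the utilization threshold?"""
    return [r["node"] for r in records if r["up"] and r["util"] > threshold]

feeds = [
    ("vendor_a", {"hostname": "core1", "load_pct": 92, "oper_status": "up"}),
    ("vendor_b", {"name": "edge7", "bps_used": 3e9, "bps_capacity": 10e9, "state": 1}),
]
view = [normalize(v, raw) for v, raw in feeds]
print(congested(view))  # ['core1']
```

Every new vendor or protocol layer means another adapter, which is why monitoring breadth decides whether the distributed model works in practice.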

Virtual networking, or the use of overlay/tunnel technology like that popularized by Nicira, can be viewed as a software control element that you can overlay on either SDN model. Virtual networking software segments physical networks into multi-tenant or multi-application networks, and it could play a role in building that top software-control layer that all SDNs need. Standards for northbound APIs from either the central or distributed SDN models are not defined, however, so it would be difficult to marry virtual networks to SDNs except by customization. For now, this adaptation is almost a fourth component element of an SDN deployment.
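The overlay idea reduces to wrapping tenant traffic with a virtual-network identifier so many tenant networks can share one physical network. The 24-bit identifier below mirrors the VNI size used by VXLAN-style overlays; everything else is a deliberately minimal sketch.

```python
def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend a virtual-network identifier header to the tenant's frame."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit identifier"
    return vni.to_bytes(3, "big") + inner_frame

def decapsulate(packet: bytes):
    """Recover the identifier and the original frame at the far tunnel end."""
    vni = int.from_bytes(packet[:3], "big")
    return vni, packet[3:]

pkt = encapsulate(5001, b"tenant-A ethernet frame")
vni, frame = decapsulate(pkt)
print(vni, frame)  # 5001 b'tenant-A ethernet frame'
```

Because the physical network only ever sees the outer packet, tenant address spaces can overlap freely, which is what makes the overlay a candidate for that top software-control layer on either SDN model.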

The basic elements of SDN -- a physical network of devices, control software and connecting technology -- are common to all SDN models, but the implementation of any of these elements may not be compatible with both models. For now, those who want to test or deploy SDNs will likely have to make a choice, at least initially, about which model to adopt and then ensure their infrastructure matches the model requirements.

About the author:
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his blog for the latest in communications business and technology development.

This was last published in March 2013
