There has been increasing interest in how network policy is applied in an SDN setting. After all, once the control and data planes of the network are separated, there are many ways to define and then apply policy.
What's interesting is that translating this network policy to devices in an SDN environment is not a one-protocol show.
SDN policy: Declarative vs. imperative
Policy is an overarching term that describes network constraints, configurations and settings. It includes the enforcement of access control, traditionally provided by access control lists (ACLs), and the setting of quality of service (QoS).
One interesting way of defining policy is to detach the intent of the policy from the way it is implemented. This is the difference between declarative and imperative control in an SDN environment.
For example, consider a three-tiered web application: the policy should state that the middle-tier application servers may talk to both the web servers and the back-end database tier, but that the web servers may not talk directly to the database tier. In that scenario, an imperative model would require specific ACL definitions tied to the infrastructure in the deployment -- i.e., switch commands in IOS or NX-OS -- which make sense only to network administrators and are far removed from the concerns of application owners and architects.
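The declarative intent above can be sketched in code. This is a minimal illustration, not any controller's actual schema; the group names, rule format and `is_allowed` helper are all hypothetical:

```python
# A minimal sketch of declarative policy for the three-tier application
# described above. Group names and rule structure are illustrative, not
# the schema of any particular SDN controller.

# Each entry names a source group, a destination group, and the traffic
# it permits; anything not listed is implicitly denied.
POLICY = [
    {"from": "app-servers", "to": "web-servers", "allow": ["tcp/8080"]},
    {"from": "app-servers", "to": "db-servers",  "allow": ["tcp/5432"]},
    # No rule from "web-servers" to "db-servers" -- that path is denied.
]

def is_allowed(policy, src_group, dst_group, service):
    """Return True if the policy permits src_group -> dst_group for service."""
    return any(
        rule["from"] == src_group
        and rule["to"] == dst_group
        and service in rule["allow"]
        for rule in policy
    )

print(is_allowed(POLICY, "app-servers", "db-servers", "tcp/5432"))  # True
print(is_allowed(POLICY, "web-servers", "db-servers", "tcp/5432"))  # False
```

Note that nothing here mentions ACL syntax, switch models or interfaces; that is exactly the separation of intent from implementation that the declarative model provides.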
Policies need to be defined at a high level, manipulated by the tenants of the network and pushed down via some protocol to the network. Examples of such SDN protocols are Open vSwitch Database Management Protocol (OVSDB) and Cisco OpFlex.
Many roads to SDN policy translation; understanding rendering
The missing part in these discussions is the translation of the policies via a southbound API into the underlying network. Note that the intent of the policies makes no mention of how the data in the networks ought to be forwarded. This translation is often called rendering, which contains much of the complexity of transforming the format of intent to another format of implementation.
As an example, we'll look at some rendering technologies and modules defined in the Group-Based Policy (GBP) project within OpenDaylight. GBP is an open source project with contributions from multiple companies, including Cisco, IBM, HP and Midokura.
Types of renderers available include:
- Open vSwitch-based network overlay renderers
- OpenFlow/Open vSwitch
- OpFlex
Notice that the renderers include technologies sometimes perceived as competitive in the technology landscape, such as OpFlex vs. OpenFlow. Within a policy framework they are not competitive, since the policy layer ought to be agnostic to the lower-level protocol each renderer targets.
In theory, there could be a renderer for a classic command-line interface, or for one of the various APIs available to control switches. In practice, we look at these three rendering systems as examples because (1) many modern cloud-based infrastructures are built on hypervisors running virtual machines, and many of these environments include an Open vSwitch; (2) OpenFlow has been an enabling technology for SDN; and (3) OpFlex is a newly proposed protocol designed to pass policy to network devices.
We cannot go into details of each renderer in the space given, but we will describe some of the information that each renderer needs to maintain to illustrate this example.
Endpoint management: Policies are defined as relationships between groups of network endpoints, rather than between the endpoints themselves. A renderer therefore needs to maintain a mapping from network interfaces to endpoint groups.
Switch configuration: One needs to manage the configuration of the network switches themselves, using a technology such as OVSDB.
Tunnel management: In an overlay network, packets are encapsulated within an IP transport packet. Whether the source and destination endpoints reside in the same Layer 2 domain determines whether the traffic between them must be routed via Layer 3.
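The tunnel-management decision above can be sketched briefly. This sketch assumes one IP subnet per Layer 2 domain, which is an illustrative simplification; the function and domain names are hypothetical:

```python
# Sketch of the tunnel-management decision: endpoints in the same
# Layer 2 domain can be bridged, while endpoints in different domains
# must be routed at Layer 3. Assumes one subnet per L2 domain.
import ipaddress

def transport_mode(src_ip, dst_ip, l2_domains):
    """Return 'bridge' if both endpoints share an L2 domain, else 'route'."""
    def domain_of(ip):
        addr = ipaddress.ip_address(ip)
        for name, subnet in l2_domains.items():
            if addr in ipaddress.ip_network(subnet):
                return name
        return None

    return "bridge" if domain_of(src_ip) == domain_of(dst_ip) else "route"

DOMAINS = {"tier-web": "10.0.1.0/24", "tier-db": "10.0.3.0/24"}
print(transport_mode("10.0.1.10", "10.0.1.20", DOMAINS))  # bridge
print(transport_mode("10.0.1.10", "10.0.3.10", DOMAINS))  # route
```

A real renderer would use this kind of decision to choose between installing a local bridging rule and directing traffic into an encapsulation tunnel toward a remote Layer 2 domain.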
We have listed some of the information maintained and decisions made by various renderers. The diversity of southbound protocols and devices makes the work challenging, but as long as each device can support the semantics of the policies, a translation by the renderer is possible. If a device is unable to perform a function -- whether from feature limitations or resource limitations, such as running out of table space -- all is not lost: the renderer can signal an exception to the network orchestration system, indicating that it cannot perform that function and that the intent of the policy cannot be applied.
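That exception path can be sketched as follows. The class and function names are hypothetical, invented here to illustrate the escalation, not part of any real controller API:

```python
# Sketch of a renderer signaling upward when a device cannot realize a
# policy's intent, rather than silently dropping it. Names are invented
# for illustration.

class RenderError(Exception):
    """Raised when a device cannot realize a policy's intent."""
    def __init__(self, device, reason):
        self.device = device
        self.reason = reason
        super().__init__(f"{device}: {reason}")

def install_rule(device, rule, table_capacity, table_used):
    """Install a rule on a device, or report why the intent cannot be applied."""
    if table_used >= table_capacity:
        raise RenderError(device, "flow table full; cannot apply policy intent")
    return table_used + 1  # new table occupancy after installation

try:
    install_rule("switch-1", {"match": "any", "action": "drop"},
                 table_capacity=2, table_used=2)
except RenderError as err:
    # The orchestration system now knows this intent was not realized.
    print(f"escalated to orchestrator: {err}")
```

The important design point is that failure is reported in terms of the policy intent, so the orchestration layer can retry elsewhere or surface the problem to the tenant.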
About the author:
Dan Conde is the Director of Products at Midokura, a network virtualization company. Dan has held product roles at VMware, Rendition Networks, NetIQ and Microsoft. He has also held software engineering positions at Digital Equipment Corporation and Xerox Corporation, both in Palo Alto, California. His interests have mostly focused on system software and systems and network management, with some detours into user interface design. Dan received his Master of Business Administration from the Haas School of Business, University of California, Berkeley, and his computer science degree from the University of California, Berkeley.