What is a CPU-based SDN switch? Infoblox's Stu Bailey explains

Infoblox's Stu Bailey talks about the differences between the CPU-based LINCX SDN switch and ASIC-based switches, and how users can get involved with the development of LINCX.

Editor's note: This is the second half of a two-part series exploring LINCX, an open source SDN switch. In the first part, we looked at how LINCX differs from Open vSwitch. In this portion, learn about the differences between LINCX and an ASIC-based switch, as well as the benefits of a CPU-based switch.

LINCX was developed with the goal of providing a fully programmable, open source switch that isn't dependent on application-specific integrated circuits (ASICs). We talked with Stu Bailey, Infoblox founder and CTO, who led a research team this past year in developing the switch, about the differences between LINCX and an ASIC-based switch, the types of controllers he's using for testing, and how users can get involved with the development of LINCX.

You described LINCX as a CPU-based switching model, or data plane. What is the difference between that and an ASIC-based switch?

Stu Bailey: So you define the data plane as a place where packets are inspected and action is taken against rule sets. An ASIC-based data plane would have that functionality built into the physical silicon … which by definition isn't programmable. You have to decide what the packet structure is, etc. It's a traditional networking model. Merchant silicon processor units … have, in general, more degrees of freedom in how we parse headers to look at various parts of the structure, but they still have a focus on some subset of existing network protocols, with some line-rate programmability. [In] CPU-based switches -- like the Open vSwitch [OVS] model -- every packet is brought into a general computing environment, and that's where the rule matching and packet transformations happen.

So the actual data plane of the network is just a server. There's no special support, other than maybe the interface cards that bring Ethernet frames off the wire and dump them into general memory. But it's really just a server with a lot of interfaces. That's what a CPU-based switching model, or a data plane, would be.
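
To make that concrete, here is a minimal Python sketch of the model Bailey describes: frames land in ordinary server memory and a software loop does all of the matching and actions. The Ethernet frame layout is real, but the rule and action names are illustrative, not LINCX internals.

from typing import Callable, List, Tuple

Frame = bytes
# A rule pairs a match predicate with an action to run on the frame.
Rule = Tuple[Callable[[Frame], bool], Callable[[Frame], None]]

def is_arp(frame: Frame) -> bool:
    # EtherType sits at bytes 12-13 of an Ethernet II frame; 0x0806 is ARP.
    return frame[12:14] == b"\x08\x06"

def process(frames: List[Frame], rules: List[Rule]) -> None:
    # First matching rule wins; unmatched frames are silently dropped.
    for frame in frames:
        for matches, action in rules:
            if matches(frame):
                action(frame)
                break

# Example rule set: flood ARP, drop everything else.
rules: List[Rule] = [
    (is_arp, lambda f: print("flooding %d-byte ARP frame" % len(f))),
]

arp_frame = b"\xff" * 12 + b"\x08\x06" + b"\x00" * 28   # broadcast ARP skeleton
ipv4_frame = b"\xff" * 12 + b"\x08\x00" + b"\x00" * 46  # IPv4 skeleton; no rule matches
process([arp_frame, ipv4_frame], rules)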

But ASICs do have built-in features. Is the goal with this switch to let you choose which features you want and program them in?

Bailey: Yes, exactly. If you think about OpenFlow, it's a first pass at an instruction set for remotely programming Ethernet processors … it's like an assembly language: here's what you can do with OpenFlow. And then, if ASICs don't support that, you can't use OpenFlow. OpenFlow has a good match with most network processing units [NPUs] today -- Trident II, etc. But if you have a CPU-based data plane, then really, it's like programming with any other kind of software. It can do anything, as long as I have a way to get the software to the right CPU where the packets are flowing in. I can do arbitrary bit pattern matching on the frames, I can rewrite them arbitrarily, I can duplicate them, drop them on the floor in an arbitrary way … LINCX explores an interesting way of compiling rule sets, so you don't need flow tables. You couldn't do what the LINCX model does with anything that has a table-based approach. So as you go from ASIC to CPU, it gets more programmable -- it's as programmable as you can get -- and we achieved 80% performance with OVS. It's nothing new to the Open vSwitch community, and achieving line-rate performance for various applications on CPU-based switching is a matter of [finding] the right applications for that.
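
As an illustration of the compiled-rule idea Bailey mentions, the hypothetical Python below contrasts walking a flow table per packet with folding the rule set into one generated function up front. This is not LINCX's actual mechanism -- LINCX compiles flows into code on the Erlang/LING VM -- it only sketches the shape of the trade-off.

def table_lookup(frame, flow_table):
    # Classic approach: walk the table for every packet.
    for ethertype, action in flow_table:
        if frame[12:14] == ethertype:
            return action
    return "drop"

def compile_rules(flow_table):
    # Fold the whole rule set into one straight-line function ahead of time;
    # exec-based code generation stands in for a real compiler here.
    src = ["def classify(frame):", "    ethertype = frame[12:14]"]
    for ethertype, action in flow_table:
        src.append("    if ethertype == %r: return %r" % (ethertype, action))
    src.append("    return 'drop'")
    namespace = {}
    exec("\n".join(src), namespace)
    return namespace["classify"]

flow_table = [(b"\x08\x06", "flood"), (b"\x08\x00", "forward")]
classify = compile_rules(flow_table)
frame = b"\x00" * 12 + b"\x08\x00" + b"\x00" * 46
assert table_lookup(frame, flow_table) == classify(frame) == "forward"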

What are the right applications for that?

Bailey: I think we're at the beginning of that cycle, partly because of how the industry is structured. What LINCX points to is the potential for programmability and price to trump performance in some set of applications for the network. It appears there is a growing class of those applications that is well served by this model of programmability and price -- both CapEx and OpEx savings.

It's early. I would say the obvious things driving CPU-based data planes and switching are high-density data center applications with clusters that have to work together … big data applications and bandwidth. It's often about delivering packets at the right time versus raw bandwidth. That can be the difference in the wall clock time for that kind of computation. So I would say, anywhere you have lots of elements coordinating to do a single task. Complexity is increasingly the barrier to scale, rather than bandwidth. We're at the beginning of that cycle. I call them broadly distributed computing applications.

The Internet of Things points to another class of those, not necessarily high-bandwidth at all. You may have family members in healthcare situations where they have sensors on their person, and if the data doesn't get to the right place at the right time, someone may not come and administer the right kind of healthcare. Those aren't high-bandwidth applications -- it's timing sensitivity. Everything needs to get to where it needs to go at very specific times: real-time-type applications. So network congestion could be more of a problem than general bandwidth.

What kind of controllers are you using to do testing for LINCX?

Bailey: Broadly speaking, LINCX supports OpenFlow 1.3 and 1.4, and it may be the only switch you can get that supports 1.4. Erlang has also given us increased velocity in features, which is nice. So we've tested with Floodlight … and primarily we like to use [FlowForwarding's] controller, which is called Loom, but any controller that supports OpenFlow 1.3 and 1.4 can work with LINCX deployments.
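
For readers wondering what "supports 1.3 and 1.4" means on the wire: version negotiation starts with an OpenFlow HELLO message, whose fixed 8-byte header carries the wire version -- 0x04 for OpenFlow 1.3, 0x05 for 1.4. The Python below just packs that header as an illustration; connecting it to a real controller such as Loom or Floodlight is left out.

import struct

OFPT_HELLO = 0
WIRE_VERSION = {"1.3": 0x04, "1.4": 0x05}

def hello(version, xid=1):
    # OpenFlow fixed header: version, type, length, transaction id,
    # packed in network byte order; a bare header is a valid HELLO.
    return struct.pack("!BBHI", WIRE_VERSION[version], OFPT_HELLO, 8, xid)

print(hello("1.4").hex())  # -> 0500000800000001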

Will you be testing with OpenDaylight or submitting use cases?

Bailey: We're not planning to do that, although it would be great if the community did that. We see this as a community effort, and we encourage people to download it and try it and say 'This works and this doesn't.' Our intention is to have it used broadly.

Who do you expect to be the end users?

Bailey: We're as excited to find that out as you are. Given that Infoblox is focused on the larger enterprise, [that tends to be] who is trying to use LINCX. But also in the FlowForwarding community, with Loom and a big data app called Tapestry … we run that inside Infoblox. So my hope is [for] early adopters in the enterprise, and generally, we've been pushing in that direction. WAN and LAN use cases in the enterprise, network monitoring, network telemetry, but with the ability to have a closed loop for action -- those are the kinds of capabilities that FlowForwarding can implement, but it's a community effort and these are the early days. So we're excited to see where it's going to go.
