There's increasing interest in new container networking software to run scalable cloud apps. As we shall see, networks can be connected to containers in many ways, and although the choices may be bewildering at first, they provide options to fit the different infrastructure approaches that people are experimenting with today.
Does container networking differ from traditional SDN?
SDN systems are still evolving. Many were created as logical versions of physical networking -- with abstractions such as ports, networks and subnets, which were in turn connected to logical network switches and attached to virtual machines. OpenStack's networking-as-a-service project, Neutron, and other SDN software supported this concept. This worked well as long as people used virtual machines (VMs) whose workloads mirrored physical servers. Although VMs are easier to spin up than physical machines are to provision, they are relatively static compared to modern container-based computing. In microservices-based systems, large numbers of containers may be provisioned and deleted quickly, so the networking system must accommodate this churn.
Basics and limitations of container networking
Fundamentally, a container is associated with a network, which, in turn, may be bridged to a network interface. That is what the default Docker networking system does, but it is too rudimentary for real-world use. Its limitations are especially acute when there are multiple hosts -- containers on different hosts cannot talk to each other -- or when it is used with a container management or orchestration system such as Kubernetes. Docker introduced a multihost container networking framework called libnetwork to address these challenges.
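On a single host, the default model looks like this. This is a minimal sketch, assuming Docker is installed and running; the network and container names are chosen purely for illustration:

```shell
# Create a user-defined bridge network (the default driver is "bridge").
docker network create app-net

# Containers attached to the same network can reach each other by name.
docker run -d --name db --network app-net redis
docker run --rm --network app-net redis redis-cli -h db ping

# But this bridge is local to one host: a container on another host
# cannot reach "db" without an overlay network or an external SDN.
```

It is exactly this single-host boundary that libnetwork and the SDN systems discussed below are designed to remove.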
Kubernetes implements networking around pods, which are collections of applications and containers that share resources. Each pod -- again, not each container, but the collection of containers -- may be assigned an IP address, unlike traditional systems where each server or VM is assigned its own address. The containers within a pod share that address and coordinate their port use so apps can communicate with each other.
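A two-container pod illustrates the model. This is a sketch, assuming access to a running Kubernetes cluster; the pod name and images are chosen for illustration:

```shell
# Containers in one pod share a network namespace: they reach each
# other over localhost and share the pod's single IP address.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # This container can reach nginx on localhost:80 -- same pod, same IP.
    command: ["sh", "-c", "sleep 3600"]
EOF

# The pod, not each container, holds the IP address:
kubectl get pod web-pod -o jsonpath='{.status.podIP}'
```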
This goes back to how applications are designed and deployed. Apps deployed within containers that in turn run in pods are designed for high availability and can readily deal with failures. In contrast, in a traditional virtualization environment, one treats VMs carefully to avoid failures. This illustrates the differences in application workload design and infrastructure operations. Some people use the pets and cattle analogy for VMs and containers -- unlike pets and VMs, cattle and containers are interchangeable.
New SDN systems
Traditional SDN frameworks are evolving to support container deployments. Orchestration systems such as OpenStack or vSphere were originally designed for virtual machines, but they are now being adapted to run containers. For example, the Kuryr project maps Docker networking onto OpenStack Neutron networking, and VMware's NSX also supports container deployments. These adjustments to meet container networking needs protect existing investments in infrastructure.
New SDN frameworks, however, are container-compatible right out of the gate. These new systems benefit app developers the most, since they isolate them from dealing with low-level networking infrastructure considerations and create a simpler view of the network. Developers care about having connectivity, but are less concerned with some traditional concepts, such as Layer 2 networking. An IP address at Layer 3 is all they need, and the new SDN systems focus on providing that well. They simplify configuration and speed up deployment.
Here are a few examples of new open source network systems.
- Calico: A Layer 3 virtual networking service, it uses built-in Linux network forwarding -- iptables -- and BGP, the routing protocol commonly used on the internet, for sharing routes within a data center. It can run on a variety of platforms, such as OpenStack, Docker and Kubernetes.
- Contiv: A project that combines an application's intent with the infrastructure's operational policies. It is available as Docker network and storage volume plug-ins.
- CoreOS Flannel: An overlay SDN system designed for container networking. It enables containers on different hosts to communicate with each other as though they were on the same host.
- Weaveworks Weave Net: An overlay SDN for containers that functions on many platforms and is designed to work across multiple hosts. It provides automatic discovery of services and avoids port-mapping. This makes it easier to construct applications consisting of multiple containers.
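As one concrete example of how such an overlay is configured, Flannel reads its network settings from etcd. This is a sketch, assuming etcd and flanneld are already running on each host; the 10.5.0.0/16 range is chosen for illustration:

```shell
# Publish the overlay configuration that each host's flanneld reads.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.5.0.0/16", "Backend": { "Type": "vxlan" } }'

# Each host's flanneld then leases a per-host subnet out of 10.5.0.0/16
# and writes it to a file that Docker can be pointed at:
#   cat /run/flannel/subnet.env
```

The result is that containers on different hosts receive addresses from one flat range and can reach each other directly, as the Flannel entry above describes.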
How do they fit together?
It's hard to keep track of how all of these systems relate, but thankfully, the developers have designed many of them to interoperate. The basic container system, such as Docker, has a default implementation that works out of the box -- "batteries are included" -- but plug-ins may be used to change the functionality -- "batteries are replaceable." This is similar to OpenStack's Neutron, which also has a plug-in approach.
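In Docker's case, swapping the batteries amounts to choosing a different network driver at creation time. A sketch, with network names chosen for illustration; the overlay driver additionally requires a multihost setup (a key-value store or swarm mode):

```shell
# Default "batteries": the built-in bridge driver.
docker network create --driver bridge my-net

# Replaceable "batteries": the same command with a different driver --
# here Docker's built-in multihost overlay. A third-party plug-in such
# as Weave Net or Contiv registers its own driver name instead.
docker network create --driver overlay my-overlay-net
```

Applications attach to the network by name either way, so the choice of driver is invisible to the containers themselves.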
Networking is the glue that binds the containers together, but early efforts were cumbersome to use. These new SDN systems are still works-in-progress, but they are worth exploring. Development and production teams may be able to use them to simplify their networking infrastructure for new container-based apps. Depending on the system, it may simplify security, multiple data center deployments and deployment efforts.
I recommend that your team examine these new open source projects, which promise to simplify networking and boost your development and operations teams' productivity. They are not exactly shrink-wrap easy, but there are often companies behind them to provide guidance.