Linux containers could be poised for a comeback in production data centers with the open source Docker tool set, presenting both opportunities and challenges for network engineers.
A Linux container is a software construct that hosts an application and its dependencies as an isolated process on a Linux kernel, allowing containerized applications to share that kernel with other containers. The open source Docker software makes Linux containers portable, scalable and secure. Docker Inc., the company behind the open source project, announced version 1.0 of the Docker software last month and will offer commercial support for it.
Docker containers are a potential alternative to (or even improvement upon) hypervisor technologies. In a hypervisor host, every virtual machine (VM) contains a guest operating system (OS) in addition to an application. A Docker container doesn't need a guest operating system. The application and its dependencies simply run in the container on top of the host operating system.
"A Linux container … is a unit that allows you to isolate and run an application on a host operating system as compared to a VM that tries to take an application measured in megabytes and a guest operating system measured in gigabytes and emulates a whole server that runs on a hypervisor, which itself is running a host OS," said Ben Golub, CEO of Docker Inc. "Containers are exceptionally lightweight. You can get 20 to 80% greater density and a significantly faster startup and shutdown with Docker than you can with a normal VM."
Like hypervisors, Docker containers give system administrators a way to allocate compute resources within servers and across data centers. And like hypervisors, containers introduce a new set of virtual network elements that network engineers will have to learn about.
The basics of Docker networking
"Docker provides the [networking] primitives to allow you to specify how different containers network with each other," Golub said. "It's evolving very quickly to the point where you can take a complex application, Dockerize each of its components, and distribute them across a large number of servers and have coordination between them."
The basic network primitive in Docker is an Ethernet bridge called "docker0." When the Docker daemon starts on a Linux server, it creates this default bridge inside the Linux kernel and assigns it a virtual subnet on the Docker host. The docker0 bridge passes packets back and forth between containers on that host.
For each container, Docker also creates a pair of virtual Ethernet interfaces: one end attaches to the docker0 bridge, while the other appears inside the container as its network interface, with an IP address assigned from the bridge's subnet, a private address range chosen because it is not already in use on the host machine.
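The resulting addressing model can be sketched in a few lines of Python. The 172.17.0.0/16 subnet shown is a common docker0 default but an assumption here; the real daemon picks whatever private range is free on the host, and the sequential hand-out below is a simplification of its behavior:

```python
# Sketch of Docker's default bridge addressing model (illustrative only).
# Assumes the common docker0 default of 172.17.0.0/16; a real daemon
# picks any private range not already in use on the host.
from ipaddress import IPv4Interface, IPv4Network

DOCKER0_SUBNET = IPv4Network("172.17.0.0/16")  # assumed default
bridge_ip = IPv4Interface("172.17.0.1/16")     # docker0 itself, the containers' gateway

def assign_container_ips(n):
    """Hand out the next n free host addresses on the bridge subnet,
    skipping the bridge's own address -- a simplification of what the
    daemon does when it wires a container's veth pair to docker0."""
    ips = []
    for ip in DOCKER0_SUBNET.hosts():
        if ip == bridge_ip.ip:
            continue
        ips.append(ip)
        if len(ips) == n:
            break
    return ips

print([str(ip) for ip in assign_container_ips(3)])
# ['172.17.0.2', '172.17.0.3', '172.17.0.4']
```

Because every container sits behind this one bridge, containers on the same host can reach each other directly, while traffic off the host is translated through the host's own address.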
Docker's networking elements were designed for use by application developers, who might first spin up containers on their laptops to provide connectivity between the different elements of an application they are coding.
"[Docker makes] it so you don't care about IP addresses and network interfaces [on a container host]," said Mike Lowell, a product management engineer for F5 Networks, which uses Docker in its own development environment. "You don't have to configure it. It just works. [When there are containers on multiple servers], then you have to think about regular network concerns."
Right now, Docker "doesn't have much to say" about network connectivity and services across multiple machines, Lowell said. The primitive network elements in Docker can provide some connectivity, but there are no insertion points for load balancers or firewalls, for instance.
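The extent of those primitives today is roughly host-level port publishing: Docker can NAT a container port out through the host, but anything beyond that, such as load balancing or firewalling, happens outside Docker. A minimal sketch of the idea, in which the image name and port numbers are illustrative assumptions:

```shell
# Publish a containerized web server on the host's port 8080.
# Docker wires this up with an iptables NAT rule on the host; the
# image name and port numbers are illustrative, not prescriptive.
docker run -d --name web -p 8080:80 nginx

# Another machine can reach the container only through the host:
curl http://<docker-host>:8080/
```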
Docker makes it easy for developers to port containers and the connectivity provided by docker0 from a development environment to testing and staging environments. Some early users are even porting these containers into production, and that's where networking pros need to be involved.
"Just like … when VMware first came out, where it was a single VLAN going to a server -- the next thing you know, you have 10 VMs on a single host on the same subnet and the networking guys didn't know about it," said Jason Edelman, a network engineer who works at a large systems integrator. "As far as Docker goes, there's no stopping an app guy or systems guy from loading 50 containers, all using the default Docker networking constructs. It means more endpoints on the network, which could have an impact due to MAC table size and different attributes like that. I haven't heard of too many of my customers looking at [Docker] just yet, but any new application residing on a [Docker] server could happen before the networking teams are engaged."
If Docker is in your environment, get involved
Server virtualization continues to be a source of conflict between network and systems teams as they hash out who controls what in the virtual networks that exist on each host. Docker containers could be yet another battlefield for the two groups.
"There is a push and pull within IT organizations of who takes care of what pieces," said Eric Hanselman, chief analyst at New York-based 451 Research. "Virtualization pushed different capabilities to different parts of IT. You had server people who had the ability to spin up new networks. The same thing exists in the container world, except it moves up into the application environment. We're certainly going to have container sprawl."
Networking pros have an opportunity to impose controls that could prevent some of that sprawl. For one thing, Docker allows users to replace the default networking functions with Open vSwitch (OVS) and other native Linux networking elements. OVS is a familiar technology to most networking pros at this point, and they could use it to maintain order.
"As a network person, I say just plug your containers into OVS and it works exactly like a virtual machine would," said Edelman, who has blogged about his early testing of Docker.
Orchestration brings order -- and challenges -- to Docker networks
OpenStack is also working on orchestration of Docker containers with Nova, just as it orchestrates VMs today, Edelman said. When that's the case, OpenStack will bring its Neutron networking elements to bear on Docker as well.
To address the need to orchestrate containers across multiple hosts, the Docker community is developing a number of orchestration tools. Libchan and Libswarm are two prominent ones that enable network services orchestration. Networking pros might find these tools problematic.
"[Libchan and Libswarm] are almost one step beyond what overlays are doing today," Edelman said. "Overlays are abstracting out the physical network. Libchan in Docker wants to abstract out even more, so it's in the hands of whoever is writing code in that container. But what about statefulness and Layer 4-7 services? How do they get inserted if they use those embedded channels?"
"Their whole value is integration with orchestration," he said. "In order to make this work, you have to be able to tie in all the extra network function pieces with the regular application orchestration. Nuage, VMware NSX or PlumGRID have the ability to make that interconnect."