NFV basics: A guide to NFV implementation, challenges and benefits
For decades, networks and network services have been built by linking purpose-built hardware through communications trunks, then inducing cooperative behavior among these elements using management systems. The process worked well for traditional services, but it is breaking down in the Internet age as operators look to reduce the cost of their infrastructure to accommodate lower revenue per bit. Now, those operators want their own flexible platforms for building new revenue-generating services because they are competing with handset and over-the-top players. Network functions virtualization (NFV) is the initiative that embodies both these goals.
Currently being considered by an Industry Specification Group within the European Telecommunications Standards Institute (ETSI), as well as other industry groups including the Open Networking Foundation, NFV aims to move network functions out of network equipment and onto virtual hosting. These functions range from simple firewalls, network address translation and load balancing to more complex processes, such as IP Multimedia Subsystem and the Evolved Packet Core framework.
If it's successfully developed and implemented, NFV will completely revolutionize how networks are built, managed and used to create services. It's unlikely that virtual hosts will totally displace network equipment (data-plane tasks like large-scale packet forwarding will likely require specialized devices, though they could be simpler and cheaper), but the high-value portion of network services could be made into a series of interoperating, cloud-hosted components. In this context, virtual devices could be used to accomplish special missions, and they could be as easy to build as a componentized application.
Network functions virtualization depends on the proper execution of four steps:
- Existing network functions have to be componentized and broken into virtual functions with standard interfaces. It's likely that the points of virtual function componentization will at first align with existing device boundaries and with standard component descriptions generated by such standards groups as the 3rd Generation Partnership Project or the Internet Engineering Task Force.
- These new virtual functions will have to be structured to be run as applications in the cloud, with virtual network interfaces instead of the physical connectors that serve device networks.
- A deployment process, something resembling DevOps, will be needed to instantiate virtual functions and connect them into cooperating systems, much as operators do today with physical devices.
- We'll need a new vision of management to handle a world where major pieces of the network are virtual software elements sharing servers, not real devices with real management interfaces to control their behavior.
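The four steps above can be sketched as a toy pipeline. Everything here is a hypothetical illustration -- the class names, the `Cloud` resource pool and the management proxy are not part of any NFV specification; they only show how componentization, virtual interfaces, deployment and mediated management fit together.

```python
from dataclasses import dataclass

@dataclass
class VirtualFunction:
    # Step 1: a function componentized out of a purpose-built device
    name: str
    interfaces: list   # Step 2: virtual NICs replace physical connectors
    image: str         # cloud-deployable packaging of the function

class Cloud:
    """Toy resource pool standing in for an OpenStack-style platform."""
    def __init__(self):
        self.instances = {}   # vf name -> host it landed on
        self.links = []       # (vf name, virtual interface) wiring records

    def schedule(self, vf):
        # Step 3: instantiate the function on shared hosting
        host = f"host-{len(self.instances)}"
        self.instances[vf.name] = host
        for iface in vf.interfaces:
            self.links.append((vf.name, iface))
        return host

class ManagementProxy:
    """Step 4: management addresses the virtual function, never the
    shared server underneath it."""
    def __init__(self, cloud):
        self.cloud = cloud

    def where(self, vf_name):
        return self.cloud.instances[vf_name]

fw = VirtualFunction("firewall", ["vnet0", "vnet1"], "fw-image:1.0")
cloud = Cloud()
cloud.schedule(fw)
print(ManagementProxy(cloud).where("firewall"))  # host-0
```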
Moving from hardware-based functions to virtual ones
Some network functions already are available in the form of open source or proprietary software, but much is embedded in hardware. To be turned into virtual components, hardware functions have to be separated from the specific hardware on which they run, which might leave some elemental device with basic features behind -- similar to what could happen with software-defined networking, or SDN.
For NFV to succeed, it will have to rely on standardized components that obey defined, interoperable interfaces. It also will have to accommodate proprietary components created by vendors to support cloud capabilities. For example, engineers will have to be able to create multiple instances of a function for improved reliability or performance. Open source packages or software-based implementations of vendor-specific routers, switches, firewalls and other elements could be suitable components in their current forms.
To make a component into a virtual function, you'll have to convert it into a standard form suitable for deployment on a virtual resource pool, such as the cloud. Obviously there are no physical plugs and sockets on a virtual function, so every real interface will have to be replaced with a virtual one, provided by something like Open vSwitch or a similar product, which would then manage the connections and data flows among the virtual functions when they deploy. At this point, some standardization of the server requirements will likely be needed to ensure that every virtual function doesn't need its own specialized hosting platform features.
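The interface substitution can be pictured with a minimal in-memory soft switch. This is purely illustrative -- it is not the Open vSwitch API -- but it shows the essential idea: named virtual ports stand in for a device's physical connectors, and the switch moves traffic between the functions plugged into them.

```python
class VirtualSwitch:
    """Minimal stand-in for an Open vSwitch-style soft switch.
    (Illustrative only; real products manage flows, VLANs, tunnels, etc.)"""
    def __init__(self):
        self.ports = {}   # virtual port name -> receive callback

    def add_port(self, name, on_receive):
        # a virtual NIC replaces a physical plug on the old device
        self.ports[name] = on_receive

    def send(self, src, dst, frame):
        # deliver a frame from one virtual function's port to another's
        self.ports[dst](frame)

received = []
vswitch = VirtualSwitch()
vswitch.add_port("fw-eth0", received.append)   # firewall's virtual NIC
vswitch.add_port("lb-eth0", lambda f: None)    # load balancer's virtual NIC
vswitch.send("lb-eth0", "fw-eth0", b"packet")  # received == [b"packet"]
```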
These newly componentized and standardized virtual functions then will have to be deployed to create services, much as service-oriented architecture components are deployed and integrated to build applications. In the cloud, this could be a function of cloud-management application programming interfaces like Nova and Quantum in OpenStack, or deployment could be managed through a DevOps process, using something like Chef or Puppet. For NFV to provide reasonable operating costs and agile deployment, some automated mechanism will be needed, and the NFV effort could either specify one or simply define how a variety of open source and commercial tools can be integrated to do the job.
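The DevOps-style deployment model boils down to desired-state reconciliation, the pattern Chef and Puppet popularized: declare the set of virtual functions a service needs, compare it with what's running, and converge. A minimal sketch, with hypothetical `start`/`stop` hooks standing in for whatever the orchestration tool actually invokes:

```python
def converge(desired, running, start, stop):
    """Toy desired-state reconciliation in the spirit of Chef/Puppet:
    start what's missing, stop what's extra, leave the rest alone."""
    for vf in desired - running:
        start(vf)
    for vf in running - desired:
        stop(vf)

actions = []
converge(
    desired={"firewall", "load-balancer"},
    running={"firewall", "old-nat"},
    start=lambda vf: actions.append(("start", vf)),
    stop=lambda vf: actions.append(("stop", vf)),
)
# the load balancer is started, the stale NAT function is stopped,
# and the firewall -- already in the desired state -- is untouched
```

Because the process is idempotent, running it repeatedly is safe, which is what makes automated, low-opex service deployment plausible at operator scale.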
NFV management challenges
Management is the most challenging aspect of NFV. It's true that NFV must somehow support operators' current network management, network operations, and operations and business support system (OSS/BSS) processes, but it's also true that doing this probably is not as simple as providing a virtual management interface for each virtual function. The actual resources that host and connect virtual functions are outside the functions, so it isn't likely they'll be fully visible to current management apps. Furthermore, they are shared resources, so no management interface can be allowed to do something to a virtual function that would have a negative impact on other functions that might share some of the resources. Management changes must also preserve network stability and security. For all these reasons, it could be that the management of virtual functions will itself be virtual, provided by some external process that relates virtual functions and their resources to a series of management interfaces that can support both current and future management activities.
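That external mediation process can be sketched as a guard between per-function management requests and the shared host. All names here are hypothetical; the point is only that the mediator, not the management app, decides whether a change to one function would starve its neighbours on the same server.

```python
class SharedHost:
    """A server hosting several virtual functions with a fixed CPU budget."""
    def __init__(self, cpu_capacity):
        self.cpu_capacity = cpu_capacity
        self.allocations = {}   # vf name -> CPU share granted

    def used(self):
        return sum(self.allocations.values())

class ManagementMediator:
    """Sketch of the external management layer described above: it maps
    per-function requests onto shared resources and refuses any change
    that would impact co-resident functions."""
    def __init__(self, host):
        self.host = host

    def set_cpu(self, vf, share):
        others = self.host.used() - self.host.allocations.get(vf, 0)
        if others + share > self.host.cpu_capacity:
            return False        # would squeeze the neighbours; reject
        self.host.allocations[vf] = share
        return True

host = SharedHost(cpu_capacity=8)
mediator = ManagementMediator(host)
mediator.set_cpu("firewall", 4)       # granted
mediator.set_cpu("nat", 3)            # granted; 7 of 8 shares in use
mediator.set_cpu("firewall", 6)       # rejected: 6 + 3 exceeds capacity
```

The same pattern generalizes: the mediator can present a conventional-looking management interface upward while enforcing shared-resource, stability and security constraints downward.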
What will it take to create NFV standards?
To address all these steps with new standards would be a task so complex it's doubtful it could be completed in a decade. It's therefore likely that ETSI and associated organizations will develop NFV using current standards from other bodies rather than creating new ones. Where no standard is suitable, they'd recommend changes to the most relevant standard available, working through the standards body that "owned" that particular standard. Any standard that currently defines the internetworking of real devices could be used to define virtual functions. For example, virtualization and cloud management standards could describe how virtual functions are actually hosted.
Network functions virtualization is an incredibly ambitious effort, but it's driven by a growing number of vendors and operators who want to create a profitable, stable and agile network framework for future services. It's already influencing network design for operators and product planning for vendors. As it matures, its effects will surely be felt even in enterprise networking. The union of the network and IT that the cloud represents could be built on the architecture that NFV defines in the near future. If so, it will shape every enterprise and consumer service delivered through the network.
About the author
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecom and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecom strategy issues.