Any new technology, or iteration of an existing one, is expected to make things faster, cheaper or otherwise reduce operational expenditures. Server virtualization has delivered these outcomes in spades, and now the benefits of network virtualization are attracting increasing levels of interest. However, some recent projects have demonstrated that the case for virtualization is not necessarily cut and dried.
In my experience as a security and IT consultant, I have seen customers -- following the zeitgeist -- invest in security virtualization, collapsing many services into a single platform. This led to significant power savings and a reduction in maintenance costs. All good: The choir invisible sang from on high about how the new technology made everything better.
Except in some cases, it didn't work -- at least not at first.
Here's what I mean. In one instance, not long after going live with a virtualized framework, a customer ran into a serious platform-level bug. The specifics are not important, but the impact was. Pre-virtualization, such an issue would have had limited scope, but a cascade failure in the shared platform took many business units offline. The issue proved difficult to troubleshoot, and several patches were required before stability was restored. But the damage was done: At the executive level, faith in the benefits of network virtualization had been lost. As a result, a network refresh was proposed; thus began my involvement. The plan was to rip and replace and, in the process, reduce the organization's reliance on shared physical infrastructure.
This is not the first time I've witnessed something like this occur. I have seen several cases where customers backed away from virtual network functions (VNFs) in favor of relatively conventional network designs. On paper, running VNFs on distributed clusters should provide attractive cost savings. However, I've seen how it can dramatically increase the complexity of systems, especially when it comes to monitoring and management.
Left unchecked, virtualized systems can impact other operations
At the core of all virtualization lies a compromise that users can mitigate but never totally resolve. Virtualized systems share physical resources, and even with resource protection, scheduling and other "soft" controls in place, virtualized systems can still negatively impact each other. For the most part they don't, and with attentive system management, many systems can share the same hardware. For the vast majority of end users, sharing resources will deliver operational cost reduction.
Server, network and security virtualization technologies all share an Achilles' heel: the bits of the system that are common to every node, switch or virtual instance. This can be the hypervisor, a shared control plane or a clustering protocol. The network/server/security estate depends on these services to operate. And that's fine, because they are totally reliable right up to the point when they are not.
Remember the two immutable facts of IT operations: There will be bugs and there will be patches (third and fourth on the list after death and taxes). If you are lucky, there will be few of either. If you are really lucky, the cause and effect will be the correct way around. Hardware and software vendors have made improvements in rolling upgrades and hitless failover, but inevitably, sometimes this stuff goes wrong. In the case of the customer above, the problem was traced to a memory leak -- the sort of thing that can (and does) happen to any vendor. But once the decision had been made, we had to put the plan into action.
Making virtual links physical again
The short-term impact of transitioning the network was predictable: A lot of copper and rack space was needed to make virtual links physical again. The engineering challenges of a big, fancy chassis aside, a lot of parallel processing power is available for peanuts, and total infrastructure capacity actually increased over the "state of the art" platform it replaced. However, with many more processors and interfaces, tracking a single packet through the infrastructure became a significant task.
In a virtualized environment, a single cluster usually equates to a single management interface. In the physical world, deploying dozens of distinct management interfaces created a significant administrative headache. With element management tools, it's possible to create policies that span the physical infrastructure, but this doesn't completely address the management plane itself. As an example, a small change to the administrative role-based access control (RBAC) policy required a change request that touched 80 devices. The obvious answer to these template-type issues is automation tooling. However, given that the organizational brass shied away from a "mature" technology such as virtualization, take a guess at their reaction to NetOps-style systems management.
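To illustrate why automation is the natural answer to that 80-device change request, here is a minimal sketch of template-driven config generation. The device names, the RBAC snippet syntax and the `render_rbac_change` helper are all illustrative assumptions for this article, not any vendor's actual CLI or API:

```python
# Minimal sketch: render one RBAC config snippet per device from a
# single template, so an 80-device change becomes a loop rather than
# 80 hand-edited change requests. All names and syntax here are
# hypothetical examples, not real vendor configuration.
from string import Template

RBAC_TEMPLATE = Template(
    "hostname $hostname\n"
    "role network-admin\n"
    "  permit command $command\n"
)

def render_rbac_change(hostnames, command):
    """Return a dict mapping each hostname to its rendered snippet."""
    return {
        host: RBAC_TEMPLATE.substitute(hostname=host, command=command)
        for host in hostnames
    }

# A fleet of 80 hypothetical edge firewalls, matching the scale of the
# change request described above.
devices = [f"edge-fw-{i:02d}" for i in range(1, 81)]
changes = render_rbac_change(devices, "show running-config")
print(len(changes))  # one snippet per device
```

In practice, the rendered snippets would be pushed through whatever configuration mechanism the devices support; the point is that the change itself is expressed once, in the template, rather than 80 times by hand.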
In the meantime, multiple smaller issues replaced the customer's one large problem; the customer opted to fight 100 duck-sized horses rather than a single horse-sized duck. There is no doubt this company swam against the tide by eschewing the benefits of network virtualization, but in this case, availability trumped (almost) every other concern. One doesn't have to be crazy to eschew virtualization, but a level of single-mindedness is required. And one must have the willpower to accept that there will be a lot of hardware and software sitting idle, and possibly many small horses with which to contend.