The cloud era has brought cost-saving advantages to businesses of all sizes, but as the technology grows, it demands data center networks and storage that can match its magnitude. Traditional approaches to networking in a cloud environment have become too complex and costly, but software-defined networking (SDN) and software-defined data center (SDDC) technology will address these challenges.
In the ultimate SDDC networking scenario, IT teams could turn networking and storage into simple workloads that live in software stacks on bare-metal servers. IT managers could use hypervisors to deploy any workload, from network load balancing to monitoring, management or other applications. The premise behind the SDDC is to eliminate both costly dedicated hardware and isolated islands of compute that are difficult to manage. SDDC aims for a flexible pool of resources that can be centrally controlled.
Ultimately, IT teams will move network and other infrastructure workloads to optimized servers. But it's not enough to simply take an x86 server and add network interface cards (NICs). IT teams will want to deploy a common infrastructure for 90% to 95% of their workloads and install special software to run on this hardware. This will help avoid the problem of isolated racks of compute that are expensive to manage and difficult to track. An example of this scenario is network functions virtualization (NFV), in which telecom carriers deploy all kinds of network functions on bare-metal servers and optimize and centrally manage those workloads. Software packages are already available on the market to optimize high-performance packet processing in these environments.
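To make the NFV idea concrete, here is a minimal sketch of network functions implemented as ordinary software on a commodity server and chained together. The packet format, function names and the round-robin policy are all illustrative assumptions, not any carrier's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Toy packet: just the fields the chain below inspects."""
    src_ip: str
    dst_port: int
    payload: bytes

def firewall(packet, blocked_ports=frozenset({23})):
    """Software firewall function: drop packets to blocked ports."""
    return packet if packet.dst_port not in blocked_ports else None

class RoundRobinBalancer:
    """Software load balancer: spread packets across backend servers."""
    def __init__(self, backends):
        self.backends = backends
        self._next = 0

    def route(self, packet):
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend, packet

def service_chain(packets, balancer):
    """Run each packet through the firewall, then the load balancer —
    two 'appliances' reduced to plain functions on one server."""
    routed = []
    for pkt in packets:
        pkt = firewall(pkt)
        if pkt is not None:
            routed.append(balancer.route(pkt))
    return routed
```

Because each function is just software, a central controller could deploy, scale or re-order this chain without touching dedicated hardware.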
SDDC meets flash storage hardware and caching software
Flash is beneficial to the SDDC because it provides a centralized storage resource that can be shared and configured on demand for workloads via software. For example, flash can improve system performance for online transaction processing (OLTP) by as much as seven times. Flash is important because it helps build out more optimized and distributed workloads in an SDDC.
But when workloads are distributed for high-availability purposes, the storage fabric can become a bottleneck for performance and coordination. To alleviate this, the storage fabric needs to be optimized for the dramatic new capabilities and latencies of flash, and caching intelligence must be coordinated across diverse tiers. Again, that's where the centralized management enabled by SDN and the SDDC comes in.
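The caching-software role described above can be sketched as a small LRU read cache standing in for a flash tier that fronts slower backing storage. The capacity, API and hit/miss counters are illustrative assumptions for the sketch, not a real caching product.

```python
from collections import OrderedDict

class FlashReadCache:
    """Minimal LRU read cache modeling a flash caching tier in front of
    slower backing storage. Capacity and interface are illustrative."""
    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store        # e.g. a dict standing in for disk
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]           # the "slow" path to backing disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return value
```

A centrally managed fabric would coordinate many such caches so that distributed workloads don't each warm and evict the same data independently.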
Where do SDDC networking and storage go from here?
Storage has been optimized and virtualized for some time, but the missing component has been the network. As the network becomes virtualized, IT teams will work to bring storage resources together with new server-based network functions, including load balancing, intrusion detection, firewalls and routers. Together, these components will be managed under the new SDN paradigm.
Within this shift, the market is moving toward a standard server that can run various workloads. Using the Open Compute Project framework, OEMs are looking to move away from boutique solutions toward standard ones, in which customers refresh their workloads onto standardized servers and rely on software stacks, rather than dedicated hardware, to optimize for those specific workloads.
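The consolidation described above — many workload types sharing a pool of identical servers — amounts to a placement problem. Below is a minimal first-fit sketch; the workload names, CPU figures and single-resource model are illustrative assumptions.

```python
def place_workloads(workloads, server_count, server_capacity):
    """First-fit placement: assign each (name, cpu_demand) workload to the
    first standardized server with spare capacity. All figures illustrative."""
    servers = [{"load": 0, "workloads": []} for _ in range(server_count)]
    unplaced = []
    for name, demand in workloads:
        for server in servers:
            if server["load"] + demand <= server_capacity:
                server["load"] += demand
                server["workloads"].append(name)
                break
        else:
            unplaced.append(name)  # pool is full; flag for capacity planning
    return servers, unplaced
```

Because every server in the pool is identical, the scheduler needs no per-appliance special cases — which is exactly the operational simplification standardized hardware is meant to buy.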
Ultimately, this will lead to operational cost savings. An early example is the telecom network equipment space, which is moving to SDDC and is already able to reduce costs by 80% or more for hardware and operations. Public and private cloud data centers are poised to see the same kind of savings.
About the author:
Brandon Hoff, director of product management at Emulex, focuses on innovation and cloud technologies to enable customized solutions for cloud providers and innovative solutions for enterprise users. Hoff has focused his career on designing, developing and applying industry-shifting technologies to solve customer problems.