Guide to software-defined everything in the data center
A comprehensive collection of articles, videos and more, hand-picked by our editors
With all the hype around software-defined storage (SDS), it can be hard to tell which of the many technology vendors claiming the term actually fall into this category. But experts agree that true SDS technologies share a central goal -- a non-proprietary system that is software-driven and enables automated, flexible provisioning of storage resources.
Essentially, SDS is emerging as an ecosystem of products that decouples software from underlying storage networking hardware, placing it in a centralized controller or hypervisor. This centralized software will provide visibility of all physical and virtual resources, enabling programmability and automated provisioning based upon consumption or need. The software can live on a server or be part of an operating system or hypervisor, but it will no longer be firmware on a hardware device. It can also control hardware from multiple vendors, enabling engineers to build non-proprietary environments.
SDS addresses several challenges that organizations face today: it offers the ability to scale infinitely, the ability to use commodity hardware rather than being locked into a single vendor, and the ability to do all this very cost-effectively, said Debbie Moynihan, VP of Marketing at storage vendor Inktank.
"I view SDS as 'user-defined storage' -- you can implement the type of storage you need and scale the way you want on any hardware of your choice," she said.
What's driving software-defined storage?
The idea of controlling storage with non-proprietary software is not new. For nearly a decade, the Ceph open source project has been developing software that allows anyone to build massively scalable unified storage with commodity hardware.
But the SDS movement has gained momentum with growing interest in software-defined networking (SDN) and the software-defined data center (SDDC). These technologies are a direct reaction to the old model of building infrastructure, which depended on reliable hardware with unreliable software, explained Moynihan. That model turned out to be an expensive and inflexible way to run IT.
"Historically, storage solutions were sold and developed as hardware appliances. The traditional approach of installing storage hardware appliances doesn't scale for the cloud," she explained.
The essential appeal of SDS is scalability and flexibility. "The software-defined infrastructure shift, which is reflected in the way scale-out companies like Google and Amazon do things, has proved the reverse model -- reliable software on unreliable hardware is easier to deploy and manage, and it's cheaper," she explained.
SDS, by contrast with traditional appliances, was conceived with cloud environments in mind. "Software-defined storage was designed to scale out to thousands of nodes and to support multiple petabytes of data, which will be the norm as the amount of stored data continues to grow exponentially and as more and more storage moves to the cloud," Moynihan said.
SDS use cases: A non-proprietary technology with centralized control
Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware, explained StorageIO analyst Greg Schultz.
Steve Houck, COO of DataCore, a company that provides SDS and storage virtualization software, agrees with the vision of vendor independence. "The greatest value to the customer [of SDS] is a capability that lets them work with any hardware or hypervisor," he said. Hardware-oriented vendors that lump their software capabilities under the SDS label but retain proprietary lock-in only confuse the market, he added.
DataCore offers a storage virtualization layer that centralizes management and enables programmability and provisioning across multi-vendor hardware. Rich Conway, senior technologist for cloud services at Amnet Technology Solutions, an IT and AV technology service provider, views SDS as a "software layer that sits between physical disks and the servers that are reading and writing." His organization has adopted DataCore SANsymphony storage virtualization software and depends heavily on the mirroring it provides. "We have two complete copies of our data. The server only sees it as one copy, but SANsymphony actually operates two complete SANs, so that any failures or maintenance tasks are invisible to the server," he explained.
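The pattern Conway describes can be sketched in a few lines of Python. This is a toy illustration of synchronous mirroring, not DataCore's implementation; all class and method names are invented for the example.

```python
# Toy illustration of synchronous mirroring: the server writes through one
# virtual volume, while the layer beneath keeps two complete copies in sync.
# Names are hypothetical, not DataCore's API.

class Backend:
    """One physical SAN; may be taken offline for maintenance."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.online = True

    def write(self, lba, data):
        if self.online:
            self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba) if self.online else None


class MirroredVolume:
    """Presents a single volume; every write lands on both backends."""
    def __init__(self, a, b):
        self.backends = (a, b)

    def write(self, lba, data):
        for backend in self.backends:   # synchronous: both copies updated
            backend.write(lba, data)

    def read(self, lba):
        for backend in self.backends:   # failover: first online copy answers
            data = backend.read(lba)
            if data is not None:
                return data
        raise IOError("both mirrors unavailable")


san_a, san_b = Backend("SAN-A"), Backend("SAN-B")
vol = MirroredVolume(san_a, san_b)
vol.write(0, b"payroll")
san_a.online = False            # maintenance on one SAN...
data = vol.read(0)              # ...is invisible to the server
```

A production mirror also has to resynchronize a backend that comes back online after missing writes; the sketch leaves that out to show only the write-twice, read-one idea.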
SANsymphony also enables auto-tiering, which Conway said makes the most efficient use of his storage assets. "You get the benefit of your fastest disk without paying too much -- the SQL database will deploy to the fast disk, while something else like backup files won't reside on the same physical disk, even though it is on the same LUN. That would be hard to do with a conventional SAN," he said.
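Auto-tiering of the kind Conway describes boils down to tracking access frequency per block and promoting hot blocks to fast media while cold blocks stay on cheap disk, all within one logical LUN. The following Python sketch illustrates that policy in miniature; the thresholds and tier names are invented, and no real product's behavior is implied.

```python
# Toy illustration of frequency-based auto-tiering within a single LUN:
# heavily read blocks (like SQL pages) are promoted to the fast tier,
# rarely read blocks (like backup files) stay on slow disk.

from collections import Counter

FAST_CAPACITY = 2     # hot blocks the SSD tier can hold (invented value)
HOT_THRESHOLD = 3     # reads before a block counts as hot (invented value)

class TieredLUN:
    def __init__(self):
        self.data = {}             # lba -> bytes: one logical LUN
        self.reads = Counter()     # access frequency per block
        self.fast_tier = set()     # blocks currently promoted to SSD

    def write(self, lba, data):
        self.data[lba] = data

    def read(self, lba):
        self.reads[lba] += 1
        self._retier()
        return self.data[lba]

    def _retier(self):
        # Promote the most-read blocks above the threshold, up to capacity.
        self.fast_tier = {lba for lba, n in
                          self.reads.most_common(FAST_CAPACITY)
                          if n >= HOT_THRESHOLD}

lun = TieredLUN()
lun.write("db-page", b"sql")
lun.write("backup", b"tar")
for _ in range(3):
    lun.read("db-page")   # hammered like a database page
lun.read("backup")        # touched once, like a backup file
```

After these accesses, "db-page" sits in the fast tier and "backup" does not, even though both live on the same logical LUN, which is the behavior Conway says is hard to get from a conventional SAN.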
Ryan Makamson, Systems Engineer for Washington State University's School of Electrical Engineering and Computer Science, currently employs a VM-aware storage appliance from Tintri. SDS means visibility from a central console and a more flexible approach to tiering and management, he said.
Tintri's VM storage appliance uses virtualization to provision flexible storage space and can house complex databases at varying levels of service quality depending on need, for example. All of this is controlled by centralized management software. "We used to have to manage a very structured and tiered infrastructure that could support database users and application users, but now virtualization puts an umbrella over that. Virtualized administration makes us more empowered to manage and consume those resources," he said. In fact, "Tintri separates consumption from administration," he added. I/O can be provisioned based on consumption.
All of these use cases reflect an interest by organizations in achieving cost-effective, scalable storage solutions that run on commodity hardware. SDS achieves that, according to Inktank's Moynihan, by offering fundamental architectural components, such as intelligent data placement algorithms, no single point of failure and the flexibility to support a wide range of implementations. "Ceph, for example, is integrated with several popular cloud platforms, has a native programming interface and can support object, block and file system storage in a single cluster. This type of flexibility is unheard of in the traditional storage world," she noted.
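The "intelligent data placement" and "no single point of failure" that Moynihan mentions go together: if every client can compute an object's location from its name alone, no central metadata server is needed. The Python sketch below illustrates that idea with simple hash-based placement; it is a toy in the spirit of algorithms like Ceph's CRUSH, not the real CRUSH algorithm, and the node names and replica count are invented.

```python
# Toy illustration of deterministic, hash-based data placement: any client
# computes where an object lives from its name alone, so no central lookup
# table (and no single point of failure) is required. NOT the real CRUSH.

import hashlib

NODES = ["osd-0", "osd-1", "osd-2", "osd-3"]   # hypothetical storage nodes
REPLICAS = 2

def place(object_name, nodes=NODES, replicas=REPLICAS):
    """Return the list of nodes holding the replicas of object_name."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    # Replicas land on distinct consecutive nodes in hash order.
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

locations = place("invoice-2013.pdf")
```

Because the function is pure, every client that knows the node list independently computes the same placement, which is what lets such systems scale out without a central broker in the data path.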
"We believe that the storage solutions that people should be buying today need to be designed for the future. Ceph was developed as an intelligent object store at its core, but it can be used for multi-purpose storage -- block, object and file system -- in a single cluster," she added.
With so many vendors now claiming SDS functionality for such varied technology, it can be difficult for buyers to make investment decisions, said Schultz. "Don't be confused, but do ask questions. If a vendor comes in and says they are software-defined, find out what they mean and what they are doing," he advised. Additionally, users should ask whether the technology is tied to VMware, Hyper-V or the cloud, and whether it can be used in their current environments.