Understanding the Problem of Unification in Most Software-Defined Storage Solutions

Software-defined storage (SDS) promises to unify all of your organization's storage assets through a single user interface. But that interface is supposed to do more than provide a single pane of glass for monitoring. It is also supposed to unify storage management functions like provisioning, snapshots, and data protection. While SDS sounds good on paper, most SDS vendors have fallen well short of the original promise of unification.

SDS per Workload is Not Unification

Workloads have varying storage performance needs. Some workloads need extreme transactional IO, while others need high bandwidth sequential IO, and still others need moderate performance on mixed IO. And some workloads don’t need high performance at all. Instead, they need access to inexpensive long-term storage.

It makes sense to use different types of hardware for each of these IO profiles. Some will benefit from Non-Volatile Memory Express (NVMe) solid state disk (SSD), others will benefit from parallel file systems, and still others from high-capacity hard disk storage. The software that drives these different storage hardware types, though, should remain consistent. The IT administrator should use the same software and the same command set to provision and protect these workloads.
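The "same command set across different hardware tiers" idea can be sketched in a few lines. The `StoragePool` class, tier names, and media labels below are hypothetical and purely illustrative, not any vendor's actual API; the point is that provisioning and data protection look identical whether the backing media is NVMe flash or high-capacity disk.

```python
# Hypothetical sketch: one management command set across storage hardware tiers.
# The class, tier names, and media labels are illustrative, not a vendor API.

class StoragePool:
    def __init__(self, name, media):
        self.name = name          # e.g. "tier1"
        self.media = media        # e.g. "nvme-ssd", "hdd"
        self.volumes = {}

    def provision(self, volume, size_gb):
        # Same call regardless of the underlying media type.
        self.volumes[volume] = {"size_gb": size_gb, "snapshots": []}
        return f"{volume}: {size_gb} GB on {self.media}"

    def snapshot(self, volume, label):
        # Same data-protection command set on every tier.
        self.volumes[volume]["snapshots"].append(label)
        return f"{volume}@{label}"

# Two very different hardware tiers, one management interface.
oltp = StoragePool("tier1", "nvme-ssd")      # transactional IO
archive = StoragePool("tier3", "hdd")        # cheap long-term capacity

print(oltp.provision("db01", 500))           # db01: 500 GB on nvme-ssd
print(archive.provision("backup01", 8000))   # backup01: 8000 GB on hdd
print(oltp.snapshot("db01", "nightly"))      # db01@nightly
```

The administrator's workflow is the same in both calls; only the pool's media attribute differs, which is the consistency the article argues most SDS products fail to deliver.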

The problem is most SDS solutions only work on one type of hardware or can’t intermix a variety of equipment. Also, most vendors optimize their SDS solutions precisely for one of these workloads. As a result, the organization ends up with an individual SDS solution per workload and is not much better off than buying a purpose-built solution.

SDS per Protocol is Not Unification

Another challenge is that most SDS solutions only support one type of protocol. The modern data center's mixture of workloads leads to a combination of protocols. Some run better on block storage (Fibre Channel or iSCSI), others run better on a custom file system, and still others need NFS or SMB. Again, the organization ends up with an individual SDS solution per protocol type.

SDS per Storage Media Type is Not Unification

Some SDS solutions even go so far as to support only one media type. A typical example is SDS solutions that only support all-flash. While the all-flash data center is a noble goal, it is not practical for most organizations. High-capacity hard disks remain a less expensive way to store data. Given the right data protection and data durability, they can store that data securely for a very long time. As a result, the organization ends up with an individual SDS solution for each type of storage media and data retention requirement.

SDS per Deployment Type is Not Unification

The deployment type is one of the most prevalent divisions within SDS. Some SDS vendors design their solutions to work only in a hyperconverged infrastructure, and others create their solutions to run only in a single bare metal configuration. The bare metal solutions can sometimes run as a VM within a virtualized environment but can't leverage multiple nodes. There are also a growing number of SDS solutions designed for containerized environments, but those solutions don't work in virtualized or bare metal configurations. The reality is that most data centers will always have some bare metal workloads and a large number of virtualized workloads. At the same time, many organizations are continuing to progress toward containerized DevOps environments. Having an SDS solution for each deployment type is impractical and makes migration to the new type more difficult.

What IT Needs from SDS

IT needs a single SDS solution, not a dozen. It requires a single software solution that runs a variety of workload types and supports multiple protocols without limiting protocol choice. It also needs to run within a range of deployment types so that migrating workloads to new deployment types is manageable.

DataCore is a veteran of the SDS marketplace. Recently their Senior Director of Product Management, Steven Hunt, joined me on our Lightboard to discuss how DataCore is attempting to meet these challenges. See the discussion here:


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

