What is Software Defined Storage 2.0?

Software defined storage (SDS) has been a common term in the data center for several years now. Its definition, for the most part, is very similar to that of its predecessor, storage virtualization. Both concepts attempt to abstract data services from the storage hardware so that the storage software decision can be made separately from the underlying hardware decision. While these capabilities are a significant step forward in terms of flexibility and cost reduction, they do little to make the data center more cloud-like. The transition to a cloud-like data center requires automation, something that IT professionals will expect SDS 2.0 to deliver.

What’s Right About Software Defined Storage 1.0?

SDS 1.0 has a lot of things going for it. It breaks the tie to hardware that plagued most storage virtualization solutions. Instead of requiring a dedicated, purpose-built hardware appliance, SDS solutions can run on off-the-shelf white box servers and even as virtual machines. They also advanced the storage feature set by providing deduplication, compression and data tiering.

What’s Wrong with Software Defined Storage 1.0?

SDS 1.0, for the most part, adheres to the legacy storage architecture of LUNs and volumes. These solutions lack a sub-volume level of understanding of the data for most of their software features. This means that capabilities like snapshots and data tiering have to occur at the volume level, not at the application or virtual machine level.

Quality of Service (QoS), if it is available at all, is also typically limited to a specific volume. This means that if a storage or application administrator wants to alter the current QoS setting of an application or virtual machine, that application or VM needs to be migrated to another volume. The volume cannot adjust to the needs of the VM.

SDS 1.0 also tends to entirely replace the software services that are available on the storage system. In other words, SDS 1.0 means that the organization is buying the same features twice: once when they are “included” with the hardware, and again with the SDS solution. The justifications for this “double-buy” are that the IT professional can now manage storage through a single pane of glass and that future storage hardware can be purchased without these services. In reality, it is hard to find a storage system without some form of data services.

Finally, most SDS architectures depend on a single- or dual-controller architecture, which limits the system’s ability to scale and limits its availability. Scale and availability are critical for the SDS 1.0 design, since it proposes to replace all data services: if these controller nodes fail, all services stop.

Software Defined Storage 2.0

SDS 2.0 should provide deeper granularity than the volume or LUN. The SDS 2.0 solution should be aware of the virtual machine and/or database constructs operating within it, and it should allow QoS parameters to be set against those constructs. A change in QoS should not necessarily trigger a migration of the dataset to a new volume; instead, the storage that surrounds the data should change.

For example, if a move from bronze to silver is requested, then the flash allocation to that dataset should be transparently increased. If the priority of the application is subsequently raised to gold, then the flash allocation may actually become larger than the hard disk allocation, almost eliminating access to non-flash media. Finally, if the application’s QoS is upgraded once more, to platinum, then its dataset is allocated 100% from flash, eliminating non-flash media access entirely.
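To make this concrete, the tier change can be modeled as nothing more than a new media ratio applied in place to the same dataset. The following is a minimal sketch of that idea; the tier names, percentages and function are illustrative assumptions, not taken from any shipping SDS product.

# Hypothetical sketch: a per-dataset QoS tier drives the media mix in place.
# Tier names and ratios are illustrative assumptions, not a vendor's defaults.

MEDIA_MIX = {
    #            (flash, hard disk) share of the dataset's capacity
    "bronze":   (0.10, 0.90),
    "silver":   (0.40, 0.60),   # flash allocation transparently increased
    "gold":     (0.70, 0.30),   # flash share now exceeds the hard disk share
    "platinum": (1.00, 0.00),   # dataset served entirely from flash
}

def reallocate(dataset_gb, tier):
    """Return how many GB of each media type back the dataset at a given tier."""
    flash, hdd = MEDIA_MIX[tier]
    return {"flash_gb": dataset_gb * flash, "hdd_gb": dataset_gb * hdd}

# Promoting a 500 GB dataset changes only its backing media, never its volume.
print(reallocate(500, "bronze"))   # {'flash_gb': 50.0, 'hdd_gb': 450.0}
print(reallocate(500, "gold"))     # {'flash_gb': 350.0, 'hdd_gb': 150.0}

A DRAM share could be added to the same table as a third column to model the memory tier discussed below.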

SDS 2.0 tiers should not be limited to flash and hard disks. They should leverage DRAM as another tier of storage that can be allocated to these various QoS types, allowing for even greater storage performance prioritization.

QoS is also not limited to performance. Another QoS parameter could be set for data protection levels. For business critical data, a QoS setting could require that data be asynchronously copied to a second, independent storage system, creating a near-real-time backup. For mission critical data, a QoS setting could require that a synchronous copy of the data be made to a second system.
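Viewed this way, a protection level is just one more attribute attached to the dataset. The sketch below is purely illustrative; the level names and the replication stubs are assumptions, not an actual SDS 2.0 API.

def protect(dataset_id, level):
    """Illustrative stub: pick a replication mode from a dataset's protection QoS."""
    if level == "mission_critical":
        # Synchronous copy: the write is acknowledged only after a second,
        # independent storage system also holds the data.
        return f"synchronous copy of {dataset_id} to the secondary system"
    if level == "business_critical":
        # Asynchronous copy: a near-real-time backup on a second system.
        return f"asynchronous copy of {dataset_id} to the secondary system"
    # Everything else gets local protection only in this sketch.
    return f"local protection only for {dataset_id}"

print(protect("orders-db", "mission_critical"))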

Another data protection capability that SDS 2.0 should offer is limiting the size of any given volume. The reason is that if a volume fails, all the data on that volume has to be recovered; the smaller the volume, the fewer applications are impacted by its loss. But because there is a constant risk of running out of capacity on a volume, keeping volumes small is difficult to manage. As a result, most SDS 1.0 data centers create a few very large volumes to simplify management. SDS 2.0 should allow a volume size limit to be set and automatically enforced: when a volume approaches a pre-specified watermark, data sets are copied off of that volume to another volume.
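A minimal sketch of that enforcement logic, assuming an 80% watermark and simple dataclasses for volumes and data sets (both assumptions, purely for illustration):

from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    size_gb: int

@dataclass
class Volume:
    size_limit_gb: int
    used_gb: int = 0
    datasets: list = field(default_factory=list)

WATERMARK = 0.80  # assumed: start evacuating once a volume passes 80% of its limit

def enforce_size_limit(volume, target):
    """Keep volumes small: once a volume crosses its watermark, copy data sets off."""
    while volume.datasets and volume.used_gb > WATERMARK * volume.size_limit_gb:
        ds = volume.datasets.pop()        # pick a candidate data set to relocate
        volume.used_gb -= ds.size_gb
        target.datasets.append(ds)
        target.used_gb += ds.size_gb

crowded = Volume(size_limit_gb=1000, used_gb=900,
                 datasets=[Dataset("vm-images", 300), Dataset("logs", 200)])
spare = Volume(size_limit_gb=1000)
enforce_size_limit(crowded, spare)   # moves data sets until crowded drops below 800 GB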

Finally, SDS 2.0 solutions should be built on a distributed model, similar to the hyper-converged and web-scale architectures that the compute tier enjoys. This could be done by deploying agents within the physical servers or virtual machines that can scan all the available storage resources. LUN and volume management should be done in the background by the SDS 2.0 solution. Storage administration should be as simple as assigning capacity, performance QoS and data protection requirements; from there, the SDS 2.0 solution should automatically apply those services to each individual data set. This architecture allows storage policies to scale across storage systems in a shared-nothing model.
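Taken together, storage administration in this model reduces to declaring a policy and letting the layer underneath place the data. The request fields and the provision() stub below are hypothetical, sketched only to show how little the administrator would have to specify.

def provision(request):
    """Hypothetical entry point: a real SDS 2.0 layer would have its agents scan
    the available storage resources and handle LUN/volume placement in the background."""
    return (f"placed {request['capacity_gb']} GB for {request['application']} "
            f"at {request['performance_qos']} / {request['protection_qos']}")

request = {
    "application": "orders-db",
    "capacity_gb": 2048,
    "performance_qos": "gold",              # maps to a media mix, as sketched earlier
    "protection_qos": "mission_critical",   # maps to synchronous replication
}

# Note that no LUN or volume is ever named by the administrator.
print(provision(request))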

Conclusion

SDS 1.0 was an important step forward for storage management, as it opened up choice within the data center while bringing some semblance of centralized management. SDS 2.0 takes the next step by eliminating the volume/LUN construct and turning storage allocation into a simple assignment of available capacity, performance and data protection capabilities. The above is just the beginning for SDS 2.0, and more should be expected from the next generation of storage software. The eventual goal of SDS should be that IT professionals no longer worry about storage details; instead, they will simply assign capacity and performance expectations to applications and let the storage infrastructure automatically adapt in the background.

Sponsored by ioFABRIC

About ioFABRIC

ioFABRIC is an example of a company on the cutting edge of delivering on the Software Defined Storage 2.0 promise. Their Vicinity software is a type of storage virtualization software designed to meet the performance and economic challenges of the new software defined data center. ioFABRIC is looking for beta customers and storage industry partners; contact them at info@iofabric.com to get involved.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
