Distributed Storage Goes Mainstream

Distributed storage systems offer more capability than traditional scale-out solutions: they scale further, are more granular and have a better multi-site/multi-cloud model. Distributed systems are typically viewed as the storage system for the next generation data center, and while they are ideal for that use case, they are also an excellent solution for the mainstream data center.

Distributed Storage vs. Scale-Out Storage

Distributed storage is different from scale-out storage. Scale-out storage systems are often single-site solutions that solve the chief problem with scale-up storage: the inability to scale capacity and performance without introducing multiple points of management. Distributed storage is the natural evolution of scale-out.

Where a scale-out system is measured in dozens of nodes, a distributed system is measured in hundreds, if not thousands, of nodes. Another difference is how data is replicated or distributed across multiple sites. While a scale-out system can replicate data, the relationship is active-passive. Distributed systems replicate data in real time, and all sites are active. Finally, distributed systems scale more flexibly than traditional scale-out, where the typical node comes with both capacity and compute. A distributed architecture can scale capacity or compute independently.
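The active-active model described above can be sketched in a few lines of code. This is a simplified illustration, not Hedvig's implementation; every class and method name here is hypothetical:

```python
# Simplified sketch of active-active replication: any site can accept a
# write, and every write is fanned out in real time to all sites, so
# every site can also serve reads. All names here are hypothetical.

class Site:
    def __init__(self, name):
        self.name = name
        self.store = {}          # key -> value, this site's local copy

    def apply(self, key, value):
        self.store[key] = value


class ActiveActiveCluster:
    def __init__(self, sites):
        self.sites = sites

    def write(self, origin, key, value):
        # Any site can accept the write (active-active)...
        assert origin in self.sites
        # ...and it is replicated to every site, keeping all copies live.
        for site in self.sites:
            site.apply(key, value)

    def read(self, site, key):
        # Reads are served locally at whichever site is closest.
        return site.store.get(key)


sites = [Site("dc-east"), Site("dc-west"), Site("cloud")]
cluster = ActiveActiveCluster(sites)
cluster.write(sites[1], "vm-001", b"disk block")  # write lands at dc-west
print(cluster.read(sites[0], "vm-001"))           # readable at dc-east
```

In an active-passive scale-out pair, by contrast, only one site would accept writes and the replica would be read-only until a failover.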

The Mainstream Use Case

What data center wouldn’t want to take advantage of a storage system that scales further, spans locations and clouds, and does so less expensively? But that solution needs to meet the requirements of the mainstream data center. And today, data centers continue to face two big challenges. The first is how to provide a cost-effective but adequately performing storage architecture for their virtualized infrastructures. The second is data protection: how do they store and manage all the backup and copy data their organizations create?

Instead of addressing these more pressing needs, most distributed, next generation storage systems focus on solving problems like supporting Docker containers or creating a multi-cloud fabric. Those are important problems to solve, but why can’t these next generation systems also help IT with these more immediate problems?

Hedvig Distributed Storage Platform 3.0

Hedvig is a distributed storage system that provides next generation data centers with a software-defined storage solution addressing problems like multi-cloud connectivity and creating developer clouds. With the 3.0 release, Hedvig is adding capabilities that address more mainstream problems.

For VMware and other hypervisors, Hedvig provides a high-performance, NFS-based solution ideal for hosting virtual machines. New in the 3.0 release is Hedvig’s FlashFabric technology, which leverages server-side flash for caching rather than replacing it. The solution automatically and dynamically tiers active data up into a server-side cache and demotes less active data to lower-cost storage. FlashFabric can also move data between two different performance tiers of flash, for example between higher performance PCIe flash and lower performance SAS or SATA flash.
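The promote-and-demote behavior described above can be sketched as a simple two-tier cache. This is a minimal illustration of the general technique (LRU promotion into a fixed-size flash tier), not FlashFabric's actual algorithm; the class and tier names are hypothetical:

```python
# Hypothetical sketch of dynamic two-tier caching: blocks that are read
# are promoted into a fixed-size server-side flash cache, and the least
# recently used blocks are demoted back to lower-cost capacity storage.
from collections import OrderedDict


class TieredCache:
    def __init__(self, flash_capacity):
        self.flash = OrderedDict()   # block -> data, kept in LRU order
        self.capacity_tier = {}      # lower-cost backing store
        self.flash_capacity = flash_capacity

    def write(self, block, data):
        # Writes land on the capacity tier; reads drive promotion.
        self.capacity_tier[block] = data

    def read(self, block):
        if block in self.flash:              # cache hit: refresh recency
            self.flash.move_to_end(block)
            return self.flash[block]
        data = self.capacity_tier[block]     # cache miss: promote
        self.flash[block] = data
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)   # demote least-recently-used
        return data


cache = TieredCache(flash_capacity=2)
for blk in ("a", "b", "c"):
    cache.write(blk, blk.encode())
cache.read("a"); cache.read("b"); cache.read("c")  # "a" gets demoted
print(list(cache.flash))                           # ['b', 'c']
```

A real implementation would track access frequency and I/O cost across multiple flash tiers (PCIe vs. SAS/SATA), but the promote/demote loop is the same idea.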

Since the solution is software-defined, it is ready for NVMe and 3D XPoint. While NFS is nothing new for Hedvig, the addition of its FlashFabric technology allows it to support a larger number of virtualized workloads. For VMware customers who like the concept of NFS-hosted VMware images, Hedvig is a logical upgrade to their current NFS solution.

IT professionals also struggle with designing storage architectures for data under protection. The secondary data set is growing at 10 to 20 times the pace of the primary data set, so the need for reliable, cost-effective and scalable storage is high. Hedvig can provide this capability. But instead of requiring the data center to convert to a new data protection solution, it leverages existing software. The first example is 3.0’s support for the Veritas OpenStorage Technology plugin, which ensures that existing Veritas NetBackup customers can seamlessly connect to a Hedvig system as a deduplicating backup target.

Hedvig is also enhancing its VMware plug-in. The new plug-in adds backup capabilities to the VMware vSphere Web Client and is now VMware Ready storage certified. Customers using Hedvig for primary storage of their VMs can now back up those same VMs through native vSphere functionality.

Finally, the 3.0 release also brings an encryption feature, Encrypt360, to the solution. It provides native, in-software encryption that begins at the host to protect data in use, in flight and at rest. It is enabled per vDisk, uses 256-bit AES, offloads cryptographic operations to Intel and AMD processors, and works alongside deduplication.

StorageSwiss Take

Moving distributed storage mainstream makes sense. As data centers lay out plans for the future, they have real problems to solve today, which often forces them to buy legacy storage solutions that may not take them into the future. Giving IT the ability to solve today’s problems with tomorrow’s storage architectures provides the best of both worlds.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

Posted in Briefing Note
