Hyper-Converged Architecture vs. Software Defined Storage

Hyper-Converged Architectures (HCA) and Software Defined Storage (SDS) are two of the most talked-about trends in the data center, and both are gaining traction. For IT planners, sifting through the deluge of vendor claims about these two technologies to determine which is best for their organizations is difficult.

Hyper-converged has SDS in its DNA

A core component of an HCA is SDS. Most HCA solutions are scale-out SDS solutions that run on each node in a hypervisor cluster. They then aggregate the storage in each node to create a shared storage pool accessible by all the virtual machines in the cluster. The value of HCA is that it eliminates a specialized storage network and significantly reduces the cost of storage, since a dedicated shared storage system is not needed. Like any other SDS solution, these HCA solutions provide most of the necessary storage services, like snapshots and cloning, but many are missing key services like data protection and replication. For small organizations, an HCA may be all the business needs.
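To make the aggregation concrete, here is a minimal Python sketch of the idea. The class and method names are hypothetical illustrations, not any vendor's actual API: each node contributes its internal disks, and the sum becomes the single shared pool every virtual machine sees.

```python
# Illustrative sketch only: models how a scale-out SDS layer might pool
# node-local capacity. Names are hypothetical, not a vendor's actual API.

class Node:
    def __init__(self, name, local_capacity_gb):
        self.name = name
        self.local_capacity_gb = local_capacity_gb

class SharedPool:
    """Aggregates the local storage of every node in the cluster."""
    def __init__(self, nodes):
        self.nodes = nodes

    @property
    def total_capacity_gb(self):
        # Every node's internal disks contribute to one virtual volume.
        return sum(n.local_capacity_gb for n in self.nodes)

cluster = [Node("node-1", 4000), Node("node-2", 4000), Node("node-3", 4000)]
pool = SharedPool(cluster)
print(f"Shared pool visible to all VMs: {pool.total_capacity_gb} GB")
# -> Shared pool visible to all VMs: 12000 GB
```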

The Hyper-Converged Challenges

HCA does have its drawbacks, however. The first challenge facing HCA solutions is that few of these systems can support external legacy storage that may already be in the data center. HCA assumes that old legacy storage will be replaced by the shared virtual volume it creates. Few data centers do that. Instead, they end up supporting HCA as yet another storage silo in the ongoing battle against storage sprawl.

There is a constant concern over storage performance in the data center. Chief among these performance concerns is predictability. Consistent performance that applications and users can count on is the most important attribute for a storage system to deliver. The second challenge with HCA is how it may put predictability at risk. This performance risk comes from the shared-everything nature of the implementation. The storage software at the heart of the HCA solution shares its processing power, memory and network with all the other processes in the hypervisor cluster. A “run” on one of those resources caused by an application spike could impact storage performance, creating a ripple effect throughout the infrastructure. The larger the data center, the more real this concern becomes. A separate storage infrastructure has the advantage of dedicated processors and networking, helping it to deliver predictable performance.
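A toy model, not a benchmark, can illustrate the risk. The per-core numbers below are assumptions chosen only for illustration; the point is that because the storage software and the applications draw from the same pool of cores, an application spike directly cuts the cycles left for storage I/O.

```python
# A toy model (not a benchmark) of the shared-everything risk described
# above: the SDS service competes for the same CPU as the applications,
# so an application spike reduces the cycles available for storage I/O.

NODE_CPU_CORES = 16
IOPS_PER_CORE = 10_000  # assumed storage throughput per available core

def storage_iops(app_load_cores):
    # Whatever the applications consume is lost to the storage stack.
    remaining = max(NODE_CPU_CORES - app_load_cores, 0)
    return remaining * IOPS_PER_CORE

print(storage_iops(app_load_cores=4))   # normal load -> 120000 IOPS
print(storage_iops(app_load_cores=14))  # app spike   -> 20000 IOPS
```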

Third, there is the challenge of scale. The HCA design, to some extent, has scaling built in. As the data center needs additional compute, it adds physical servers, each of which brings its own internal storage capacity. The existing storage volume then adds this new capacity to the aggregate. There are two downsides to this type of scaling. First, the addition of compute, storage performance and storage capacity is tightly coupled, yet the need to expand these resources is rarely in lockstep. Second, as the number of nodes contributing storage to the virtualized volume grows, the importance of the network increases. At scale, when the node count is in the double digits, the shared-everything architecture of the HCA can become complex.
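Some back-of-the-envelope arithmetic, using assumed per-node figures, shows both downsides at once: capacity and compute can only grow in lockstep, while the potential east-west replica paths between nodes grow roughly with the square of the node count.

```python
# A back-of-the-envelope sketch of the two scaling downsides above.
# Per-node figures are assumptions chosen only to illustrate the coupling.

NODE_CAPACITY_TB = 10
NODE_CORES = 16

for nodes in (3, 6, 12):
    capacity = nodes * NODE_CAPACITY_TB  # capacity scales...
    compute = nodes * NODE_CORES         # ...in lockstep with compute
    # Every node may exchange replica traffic with every other node.
    network_paths = nodes * (nodes - 1)
    print(f"{nodes:>2} nodes: {capacity} TB, {compute} cores, "
          f"{network_paths} east-west paths")
# ->  3 nodes: 30 TB, 48 cores, 6 east-west paths
# ->  6 nodes: 60 TB, 96 cores, 30 east-west paths
# -> 12 nodes: 120 TB, 192 cores, 132 east-west paths
```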

The Software Defined Advantage

SDS, when not in its hyper-converged state, offers a viable alternative for larger environments looking to reduce storage costs while maintaining flexible scaling. As Storage Switzerland detailed in its article “The Three Problems with Software Defined Storage,” SDS solutions are available that can extend SDS benefits to legacy storage already in the environment. These benefits include a single point of management and a unified storage feature set. Additionally, because they leverage existing storage architectures, they take full advantage of dedicated storage networking and storage compute. In other words, they bring the management simplicity of a shared-everything environment without risking predictable performance or forcing the organization to repurchase storage it already owns.
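A minimal sketch, with hypothetical class names rather than any particular product's API, of what that single point of management looks like: the SDS layer fronts heterogeneous legacy arrays and applies one feature set (a snapshot service here) uniformly across all of them.

```python
# Hypothetical sketch of a single management plane over existing arrays.
# Names are illustrative, not any SDS product's real interface.

class LegacyArray:
    def __init__(self, vendor, capacity_tb):
        self.vendor = vendor
        self.capacity_tb = capacity_tb

class SDSController:
    """One management plane over existing arrays; no storage repurchase."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_capacity_tb(self):
        # Legacy capacity is pooled rather than replaced.
        return sum(a.capacity_tb for a in self.arrays)

    def snapshot_all(self, volume):
        # A unified service applied uniformly, regardless of vendor.
        return [f"snapshot of {volume} on {a.vendor}" for a in self.arrays]

sds = SDSController([LegacyArray("VendorA", 50), LegacyArray("VendorB", 80)])
print(sds.total_capacity_tb())   # -> 130
print(sds.snapshot_all("vol1"))
```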

Conclusion

HCA solutions certainly have value to some data centers, and the solutions are implemented in a variety of ways. In general, there is reason for concern when initially implementing these solutions, as they may create a new silo of storage. If the data center has any sizable investment in legacy storage, the investment in HCA may not be that appealing after all. As the environment scales, HCA has some specific weaknesses in terms of flexibility of upgrades and providing predictable performance. Given these concerns, traditional data centers with an existing investment in storage systems and a need to provide predictable performance may want to consider an SDS solution that is not hyper-converged.

Sponsored by FalconStor

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
