Storage Problems Limit Hyperconverged Scale

Hyperconvergence has a real problem. It can’t scale. While hyperconverged architectures may be a good match for medium-sized businesses, as the environment scales, its “all for one, one for all” simplicity becomes problematic. As is usually the case, storage is the culprit.

While there are variations on the theme, a typical HCI environment combines storage, networking and compute into a single node. Those nodes are clustered together, and data is replicated or erasure coded to storage in other nodes. Copying data this way enables the movement of virtual machines between nodes and provides protection from a storage media failure. Many virtualized infrastructures already have an east-west network traffic concern at scale, and adding storage to that same path further compounds it.

The Replication Challenge

Replicating data between nodes is computationally less expensive than erasure coding, but it is dramatically more expensive from a capacity standpoint. Most replication designs call for three instances of data to provide enterprise-class reliability. For an organization with 10TB of data, another 20TB for protection and VM mobility is reasonably justifiable. For an organization with 75TB of data, purchasing another 150TB of storage is much harder to justify, and the organization will very likely end up buying extra nodes just to store the protected copies, wasting compute resources.
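
As a quick sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. The three-copy policy is the one described above, and the 10TB and 75TB figures are the examples from this post; exact overhead will vary by vendor and policy.

```python
# A minimal sketch of the replication capacity math above, assuming a
# three-copy policy (one primary plus two replicas). The 10TB and 75TB
# figures are the examples used in this post.

def raw_capacity_needed(usable_tb: float, copies: int = 3) -> tuple[float, float]:
    """Return (total raw TB, extra TB bought purely for protection copies)."""
    raw_tb = usable_tb * copies
    return raw_tb, raw_tb - usable_tb

for usable_tb in (10, 75):
    raw_tb, extra_tb = raw_capacity_needed(usable_tb)
    print(f"{usable_tb} TB of data -> {raw_tb:.0f} TB raw "
          f"({extra_tb:.0f} TB just for the extra copies)")
```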

The Erasure Coding Challenge

Erasure coded HCI designs combat the capacity overhead of replication by storing data in a RAID-like fashion across the nodes. Each write is segmented, each segment is sent to a node in the cluster, and parity segments are generated and stored on other nodes. There is a computational concern with this design, since the node receiving the write has to perform the segmentation and calculate the parity. In addition to the computational impact, there is also a more severe network impact, because every read and write requires that each node (or a high number of them) send data across the network. This in turn requires re-architecting the network for deployments of any reasonable scale.
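
The capacity-versus-network trade-off is easier to see with numbers. The sketch below compares three-way replication against an assumed 4 data + 2 parity erasure coding stripe; actual stripe geometries vary by HCI vendor, so treat these figures as illustrative only.

```python
# An illustrative capacity-versus-network comparison: three-way
# replication against an assumed 4 data + 2 parity erasure coding
# stripe. Actual stripe geometry varies by HCI vendor.

def replication_ratio(copies: int = 3) -> float:
    """Raw-to-usable capacity ratio for N-way replication."""
    return float(copies)

def erasure_ratio(data_segments: int = 4, parity_segments: int = 2) -> float:
    """Raw-to-usable capacity ratio for a k data + m parity stripe."""
    return (data_segments + parity_segments) / data_segments

def nodes_touched_per_write(data_segments: int = 4, parity_segments: int = 2) -> int:
    """Every write sends one segment (data or parity) to this many nodes."""
    return data_segments + parity_segments

print(f"3-way replication: {replication_ratio():.1f}x raw capacity per TB stored")
print(f"4+2 erasure code : {erasure_ratio():.1f}x raw capacity per TB stored, "
      f"but each write touches {nodes_touched_per_write()} nodes")
```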

The Server Down Challenge

In either design, if a server fails or needs to be taken down for maintenance, its storage goes with it. If there is a media failure, the response varies by HCI vendor. Some vendors run an internal per-node RAID, so a media failure won’t impact compute. But that internal RAID costs capacity, and potentially additional compute, to calculate yet another layer of parity. Others don’t provide any internal media protection, so a media failure is essentially a node failure. Evacuating a server's data ahead of planned maintenance can protect the HCI cluster from an unplanned failure. However, the evacuation, rebuild and rebalancing activity exacerbates an already busy network and can add hours to the simplest server patch.
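
How many hours? That depends on how much data lives on the node and how much of the network the evacuation is allowed to consume. The sketch below is a rough estimate only; the 25GbE link speed and 50 percent effective utilization are assumptions, not measurements of any particular HCI product.

```python
# A rough estimate of the time needed to evacuate a node's data across
# the cluster network before maintenance. The 25GbE link speed and 50%
# effective utilization are assumptions, not measurements of any product.

def evacuation_hours(node_data_tb: float,
                     link_gbps: float = 25.0,
                     effective_utilization: float = 0.5) -> float:
    """Hours to copy node_data_tb over one link at the given utilization."""
    bits_to_move = node_data_tb * 8e12                      # TB -> bits
    effective_bits_per_sec = link_gbps * 1e9 * effective_utilization
    return bits_to_move / effective_bits_per_sec / 3600

for node_data_tb in (10, 30, 60):
    print(f"{node_data_tb} TB on the node -> "
          f"~{evacuation_hours(node_data_tb):.1f} hours of copying")
```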

Opening Up Hyperconverged Architectures

Organizations requiring rack-scale virtualized infrastructures will want to direct some of their traffic north-south. The key is to get the locality advantages of in-node storage without flooding the network. Another key is not to use a sledgehammer to cut a watermelon. The problem is a storage problem; it does not require a completely new set of servers.

Instead, the storage software (IO processing and data services) should load onto the existing virtualization hosts, leverage less expensive internal server flash for active data, and send writes to shared, durable capacity over a north-south network connection. That combination provides local, low-latency performance for active data while protecting data from loss via a shared storage appliance.

It also lowers east-west network utilization by eliminating server-to-server chatter for data protection, which enables better and more predictable scaling of the virtualization environment.
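
As a simplified illustration of that shift, the sketch below compares where a day's worth of write traffic lands in the two models. It assumes the shared-appliance design sends each write north-south once and keeps no replica traffic on the east-west fabric; the workload figure is made up for the example.

```python
# A simplified traffic comparison: three-way replication inside the HCI
# cluster versus a design that writes once, north-south, to a shared
# durable appliance. The 2,000 GB/day workload is an assumed figure.

def hci_east_west_gb(writes_gb: float, copies: int = 3) -> float:
    """Replica copies beyond the local one cross the east-west fabric."""
    return writes_gb * (copies - 1)

def shared_appliance_gb(writes_gb: float) -> tuple[float, float]:
    """Return (east-west GB, north-south GB) when protection lives in the appliance."""
    return 0.0, writes_gb

writes_per_day_gb = 2_000
print(f"HCI replication : {hci_east_west_gb(writes_per_day_gb):.0f} GB/day east-west")
ew, ns = shared_appliance_gb(writes_per_day_gb)
print(f"Shared appliance: {ew:.0f} GB/day east-west, {ns:.0f} GB/day north-south")
```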

To learn more about the problems with hyperconverged architectures and how to fix them, watch our on demand webinar, “Hyperconvergence is Broken, Learn How to Fix it!”

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
