The HCI Resource Efficiency Problem

As IT teams attempt to scale Hyperconverged Infrastructure (HCI), they run headlong into the HCI resource efficiency problem. The selling point of HCI solutions is that they scale by “just adding a node.” Each node brings a fixed amount of processing power, networking capability, storage performance, and storage capacity. That is the problem. While every data center eventually needs to scale these resources, no data center scales them in unison. The result is highly inefficient resource utilization, which makes HCI even more expensive over time than it is upfront.

Workarounds for the HCI Resource Efficiency Problem

The Separate HCI Instances Workaround

One potential workaround for the HCI resource efficiency problem is implementing multiple instances of the HCI solution. Nodes in one instance could be configured to deliver more processing power and storage performance. Nodes in another instance could be configured to deliver more storage capacity. A third instance could have nodes with graphics processing units (GPUs) configured to service analytics or machine learning applications.

The first problem with this workaround is apparent: you now have at least three instances to manage instead of one, and a single converged infrastructure is the whole point of HCI. The second problem is that these resources are now siloed. If a workload attached to your high-performance instance needs access to more capacity, you can’t quickly provision space on your capacity-centric instance. That is not efficient and doesn’t solve the HCI resource efficiency problem.

The third problem is the long-term viability of these HCI instances. What happens when a new, higher-performance CPU comes to market, or faster NVMe flash, storage-class memory, higher-density hard drives, or more powerful GPUs? Hardware innovation is constant, and the ability to adopt those innovations helps you drive down costs, respond more quickly to customer demands, or gain a competitive advantage.

For each innovation, with standard HCI, IT will need to set up a new instance, migrate workloads and data, and add yet another point of management. Ultimately, the approach creates HCI silos. If you recall, eliminating silos was supposed to be the reason for moving to HCI from the legacy three-tier architecture.

The Capacity Nodes Workaround

The resource data centers most commonly run short of is storage capacity. With standard HCI, storing more data means buying additional nodes, each with a full complement of processing power and networking, just to get more capacity. In response, some HCI vendors offer “capacity nodes” to meet this demand without leaving customers with excess processing power.

Capacity nodes themselves are not a problem. The HCI problem with capacity nodes is one of data layout and data efficiency. Many HCI vendors offer erasure coding, in addition to mirroring or replication, as a form of protection from drive failure. Erasure coding delivers better capacity utilization than mirroring, but only about 20% better once all the math is factored in. For that 20%, you adopt a data layout strategy that places a hefty burden on a scale-out infrastructure.
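To put rough numbers on that 20%, here is a minimal sketch. It assumes two-way mirroring and a common 4 data + 2 parity erasure-code layout; these parameters are illustrative, not drawn from any specific HCI product, and real-world figures vary with metadata, spare space, and rebuild reserves.

```python
# Rough usable-capacity math: mirroring vs. erasure coding.
# Assumed layouts (illustrative only): two-way mirroring and a
# 4 data + 2 parity erasure-code stripe.

def usable_fraction_mirror(copies: int = 2) -> float:
    """Fraction of raw capacity left when every block is stored `copies` times."""
    return 1 / copies

def usable_fraction_ec(data: int = 4, parity: int = 2) -> float:
    """Fraction of raw capacity left in a data+parity erasure-code stripe."""
    return data / (data + parity)

mirror = usable_fraction_mirror()   # 0.50 -> 50% of raw capacity is usable
ec = usable_fraction_ec()           # 0.67 -> ~67% of raw capacity is usable

print(f"mirroring: {mirror:.0%} usable")
print(f"erasure coding (4+2): {ec:.0%} usable")
print(f"difference: {ec - mirror:.0%} of raw capacity")
# ~17 points of raw capacity, before metadata, spares, and rebuild
# reserves eat into it -- in the same ballpark as the ~20% cited above.
```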

Erasure coding also forces vendors to use nodes of similar capacities, preferably with same-sized drives, and is part of the reason HCI nodes are so uniform. If you start to mix in capacity nodes, you are asking the erasure code to run twice: once for the standard nodes and once for the capacity nodes. That overhead severely impacts performance. In several HCI vendors’ support documents, we’ve seen recommendations to turn off erasure coding and deduplication if the customer wants to use capacity nodes.

Let that marinate for a moment. You are adding capacity nodes because you have a lot of data, yet you are told to turn off two other data efficiency methods when you add them. Your alternative is to create a separate instance of just capacity nodes (see above). The exploding-head emoji is in order here.

Both of these workarounds have significant cost ramifications and make it difficult for customers to extract the full value of scale.

Solving the HCI Efficiency Problem

The solution to this problem is a single instance with multiple clusters. This ultraconverged infrastructure (UCI) enables IT to add nodes of vastly different types to the same instance, which then pools the different clusters into a single resource. At VergeIO, we call that single instance a data center operating system: VergeOS. VergeOS supports unlimited clusters (node types) and groups them into a global resource pool. IT can then use our Virtual Data Center (VDC) technology to assign specific clusters to specific workloads. VergeOS also provides global inline deduplication, which eliminates data redundancy across all the clusters, delivering the highest level of data efficiency without impacting performance.

The result is a highly efficient and highly utilized infrastructure, which lowers costs and simplifies IT operations.

If you want to learn more, please register for our live webinar, “Beyond HCI — The Next Step in Data Center Infrastructure Evolution.” During the webinar, VergeIO’s Principal Systems Engineer, Aaron Reed, and I will take you through an in-depth comparison of HCI vs. UCI. I’m even going to talk Aaron into giving you a live demonstration of VergeOS in action.

Eager to learn more and don’t want to wait for the webinar? Subscribe to our Digital Learning Guide, “Does HCI Really Deliver?”

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO, he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, where he remains the primary contributor, writing blogs that educate IT professionals on all aspects of data center storage. He is also a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud, and enterprise flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
