What is HCI 2.0?

On paper, hyper-converged infrastructure (HCI) looks like the perfect solution to most organizations’ IT woes. The first generation of HCI solved point problems like virtual desktops and Tier 2 virtual workloads, but it lacked the power and efficiency required to support Tier 1 applications, and it could not support mixed workloads. Customers often had to stand up separate HCI clusters for specific workloads, which led most organizations to HCI sprawl.

When it comes to Tier 1 workloads, IT typically assigns these applications to dedicated bare metal servers and never brings them into the HCI environment, which creates still more sprawl. Some HCI vendors claim that adding flash to their offerings expands the use cases and increases workload variety, but organizations continue to find these solutions lacking, as evidenced by the continued sprawl.

HCI 2.0 is a combination of enhanced hardware and software designed specifically to meet the demands of the enterprise. It can handle both Tier 1 applications and multi-tenant, mixed workload environments, all within the same cluster. HCI 2.0 also delivers extreme cluster efficiency that maximizes the utilization of each node. HCI 2.0 not only reduces HCI sprawl, it also reduces node sprawl by supporting a mixture of workloads on a minimal number of highly utilized nodes.

Why HCI 1.0 Falls Short – The Hardware Does Matter

One of the supposed advantages of HCI 1.0 is that these systems leverage commodity hardware to create a cost-effective, scalable solution. These solutions also depend on off-the-shelf hypervisors like VMware vSphere or Microsoft Hyper-V, which need higher-performance hardware to deliver the performance that Tier 1 and mixed workloads require.

Scale-Out Doesn’t Fix Bad Performance

Most HCI vendors state that when an organization needs more performance, it “simply” needs to add another node. There are several problems with this claim. First, nodes are never “simply” added. Adding a node means IT needs to make additional physical space, network connections, and power available in the data center. IT then needs to physically and logically connect the node to the cluster, and the HCI software must redistribute data to it. The ability to add a node to increase available capacity and performance does have value, but that expansion should happen only when the existing nodes are at their limits.

The second problem with “simply” adding a node to increase performance is that most hypervisors don’t stripe CPU utilization across nodes. A virtual machine is usually 100% resident on one node at a time; its CPU power and its IO path all come from that node. The way the HCI solution stores the VM’s data within the hypervisor cluster may also make the node’s internal storage performance more critical. The result is that adding a node doesn’t improve the performance of any particular VM. It only allows the organization to redistribute other VMs to other nodes so that more CPU power is available to the first VM, which often doesn’t fix the original problem.
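
To make that concrete, here is a minimal sketch of the single-node ceiling; the node figures below are hypothetical assumptions for illustration, not numbers from any specific HCI product:

```python
# Illustrative model only; node numbers are hypothetical assumptions.
# A VM runs entirely on one node, so its ceiling is that node's resources,
# no matter how many nodes the cluster contains.

NODE_CPU_GHZ = 16 * 2.4        # one node: 16 cores at 2.4 GHz (assumed)
NODE_STORAGE_IOPS = 80_000     # one node's local storage ceiling (assumed)

def vm_performance_ceiling(cluster_nodes: int) -> dict:
    """A single VM is resident on one node, so adding nodes raises the
    cluster-wide aggregate but not the ceiling of that one VM."""
    return {
        "vm_cpu_ghz": NODE_CPU_GHZ,                       # unchanged by cluster size
        "vm_iops": NODE_STORAGE_IOPS,                     # unchanged by cluster size
        "cluster_cpu_ghz": NODE_CPU_GHZ * cluster_nodes,  # aggregate grows
        "cluster_iops": NODE_STORAGE_IOPS * cluster_nodes,
    }

for nodes in (4, 8, 16):
    print(nodes, vm_performance_ceiling(nodes))
```

Whether the cluster has 4 nodes or 16, the single VM’s ceiling stays the same; only the cluster-wide aggregate grows.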

The Scale-Out Imbalance Problem

The net effect of the “simply add a node” mentality is node sprawl. Nodes are added, as the vendor recommends, to address a performance challenge. The problem is that those nodes almost always come with storage, and most vendors sell their nodes fully populated to eliminate the need to upgrade an individual node later. Very quickly the organization finds itself in one of two dangerous situations: it has too much unused capacity because it is adding nodes to address performance issues, or it has too many CPU resources because it is adding nodes to address a capacity problem. It is extremely rare for an organization to scale both storage and compute resources in lockstep.
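
A quick back-of-the-envelope model illustrates the imbalance; the node sizes and workload figures below are hypothetical assumptions:

```python
import math

# Illustrative model only; node sizes are hypothetical assumptions. Every
# node purchased for CPU also brings storage (and vice versa), so scaling
# for one resource strands the other.

NODE_CORES = 32        # cores per node (assumed)
NODE_CAPACITY_TB = 60  # raw capacity per node, sold fully populated (assumed)

def nodes_needed(required_cores: int, required_tb: int) -> dict:
    """The cluster must satisfy the larger of the two requirements;
    whatever exceeds the other requirement is paid for but unused."""
    for_cpu = math.ceil(required_cores / NODE_CORES)
    for_capacity = math.ceil(required_tb / NODE_CAPACITY_TB)
    nodes = max(for_cpu, for_capacity)
    return {
        "nodes": nodes,
        "unused_cores": nodes * NODE_CORES - required_cores,
        "unused_tb": nodes * NODE_CAPACITY_TB - required_tb,
    }

# A CPU-bound workload: 600 cores needed, but only 150 TB of data.
print(nodes_needed(required_cores=600, required_tb=150))
# -> 19 nodes to cover the CPU need, leaving roughly 990 TB of capacity unused.
```

Flip the example to a capacity-bound workload and the same model strands CPU instead; the imbalance exists whichever dimension drives the purchase.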

HCI 1.0 Doesn’t Reduce Costs

Even with the use of low-cost commodity nodes, HCI 1.0 doesn’t always reduce costs. In most cases, the organization adds more nodes than it should need and ends up overbuying capacity or CPU. HCI 1.0 doesn’t typically reduce complexity either. Most data centers with an HCI investment cannot put all of their workloads on it because of performance concerns. As a result, they have to create additional dedicated HCI clusters for each workload, or they have to maintain bare metal systems to run critical Tier 1 applications.

In our next blog, we introduce the concept of HCI 2.0, which maximizes the capabilities of each node to meet the storage density demands of Tier 1 applications and enables a variety of workloads on a single HCI cluster. The key is creating efficient, powerful nodes that can both scale out and scale deep.

In the meantime, join Storage Switzerland and Axellio Inc. for our on-demand webinar, “How to Put an End to Hyperconverged Silos.” In this webinar you’ll learn why current-generation HCI solutions fall short and the essential requirements for HCI’s next generation.

Key Takeaways:

  • Learn Why HCI 1.0 Shortcomings Are Costing You Money and Adding Complexity
  • Learn Why Hardware Matters in HCI Solutions
  • Learn How HCI 2.0 Is Built for Tier 1 Workloads

Register now and receive a free copy of Storage Switzerland’s latest eBook, “What is HCI 2.0?”.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

