For the vast majority of enterprises, the question is not whether to go all-in on the public cloud or to keep all workloads on-premises. A hybrid cloud architecture that uses both is required to meet applications’ wide-ranging cost, control and performance requirements. A precursor to getting there is not only modernizing on-premises infrastructure to be more cloud-like, but also consolidating it for multitenancy and more efficient operations. Hyperconverged infrastructure (HCI) provides a fast path to a cloud-like on-premises infrastructure. The problem is that the first generation of HCI cannot consolidate all workloads onto a single cluster; it is usually limited to use-case-specific deployments such as virtual desktop infrastructure (VDI).
The Problem with HCI 1.0
The first generation of HCI, which Storage Switzerland calls HCI 1.0, deploys an HCI software stack comprising server virtualization, storage virtualization and management on a commoditized, industry-standard x86 server. Commodity servers are inexpensive, but they carry a number of limitations when it comes to serving a hybrid cloud implementation. The most significant limitation lies in the peripheral component interconnect express (PCIe) architecture. Most commodity servers support up to 24 drives per node (four PCIe slots with 16 data lanes per slot), but when those drives are solid-state drives (SSDs), the server lacks the bandwidth to exploit their full performance potential because the PCIe bus and the single storage controller become a bottleneck. The enterprise must then add nodes to meet capacity demands, as well as migrate other workloads off the node to meet performance demands. This creates complexity and inefficiency in the form of underutilized data center infrastructure: each node ends up with either too much compute power or too much capacity.
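The bottleneck described above can be illustrated with some back-of-the-envelope math. This sketch assumes PCIe Gen3 (roughly 1 GB/s of usable throughput per lane) and x4-attached NVMe SSDs; the drive count and slot layout are taken from the figures above, while the per-lane and per-drive numbers are illustrative assumptions, not measurements of any specific server.

```python
# Illustrative oversubscription math for a commodity HCI 1.0 node.
# Assumptions (not from the article): PCIe Gen3, ~1 GB/s usable per
# lane, and each NVMe SSD attached via x4 lanes.

SLOTS = 4                 # PCIe slots per node (from the article)
LANES_PER_SLOT = 16       # x16 slots (from the article)
DRIVES = 24               # max drives per node (from the article)
LANES_PER_DRIVE = 4       # typical x4 NVMe attachment (assumption)
GBPS_PER_LANE = 1.0       # approx. usable PCIe Gen3 throughput (assumption)

host_lanes = SLOTS * LANES_PER_SLOT        # lanes the host can actually offer
drive_lanes = DRIVES * LANES_PER_DRIVE     # lanes the drives could consume

host_bw = host_lanes * GBPS_PER_LANE       # what the server can deliver
drive_bw = drive_lanes * GBPS_PER_LANE     # what the SSDs could deliver

print(f"Host bandwidth:   {host_bw:.0f} GB/s")   # 64 GB/s
print(f"Drive potential:  {drive_bw:.0f} GB/s")  # 96 GB/s
print(f"Oversubscription: {drive_bw / host_bw:.1f}x")  # 1.5x
```

Even under these generous assumptions (ignoring the storage controller entirely), a fully populated node leaves roughly a third of the SSDs’ aggregate throughput stranded, which is why operators end up adding nodes or migrating workloads instead of using the capacity they already bought.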
Creating a Consolidated Environment with HCI 2.0
The architectural limitations of HCI 1.0 lead to node sprawl and to the creation of clusters for specific workloads. HCI makes more sense when a single cluster can run more (or all) workloads. Doing so requires an optimized server hardware architecture that enables both capacity and compute to be scaled at the node level. The enterprise is then free to create high-performance non-volatile memory express (NVMe) nodes that can support more workloads, and a more diverse range of workloads. Putting both scale-up and scale-out applications in a single cluster requires a hardware architecture that can handle the storage density and heavy I/O, keeping noisy neighbors at bay while simplifying data management. Virtual machine consolidation not only reduces the amount of hardware that is needed, it also drives down node-oriented software license costs. Capacity-oriented nodes may also be created to serve storage-intensive workloads on a more efficient data center footprint. Lastly, but far from least important, minimizing node sprawl means there is less infrastructure to manage. Fewer IT staff can manage more workloads, and the IT team has more time available for strategic activities instead of being bogged down with day-to-day infrastructure management.
While the concept of “just adding a node” is popular in the HCI world, “scaling-in” (scaling up before scaling out) can result in dramatic infrastructure and staffing cost savings as enterprises migrate to the hybrid cloud. In our next blog, we will evaluate Microsoft Azure Stack HCI specifically, and the unique merits that it brings to both private and hybrid cloud implementations.
In our on-demand webinar, “Simplifying the Enterprise Hybrid Cloud with Azure Stack HCI,” Storage Switzerland’s Lead Analyst George Crump and Advanced Computation and Storage’s President Robert Peglar dive deep into the challenges facing organizations looking to create a hybrid infrastructure and how Azure Stack HCI might help them. Register now to learn more.