Most enterprises have experienced early value and success in deploying hyperconverged infrastructure (HCI) for virtualized workloads. In fact, it is HCI's ability to streamline the path to the on-premises private cloud, by abstracting the storage controller into the hypervisor and, increasingly, integrating software-defined networking (SDN) functionality, that has made it attractive to many organizations.
However, the reality is that one cloud cannot meet every workload's needs. This is especially true for data-intensive workloads such as streaming analytics, which have demanding capacity requirements, need cost-effective and elastic compute, and carry particular data privacy concerns. For these workloads, a hybrid cloud model, in which the workload is seamlessly portable on and off premises, makes sense.
The “first generation” of HCI solutions typically provides some capability for virtual machines to run in public cloud instances. Furthermore, many HCI vendors have invested in capabilities such as cloud gateways, centralized management and cloud service metering that bring them closer to delivering a hybrid cloud experience to customers. Most of these solutions were designed to run on commodity off-the-shelf (COTS) hardware, however. Delivering the levels of processing power, memory and storage capacity that mission-critical workloads require, without breaking the bank, calls for a more robust and purpose-built underlying hardware infrastructure (as defined in a previous Storage Switzerland blog, “HCI 2.0”).
The hybrid cloud aims to marry the best of on-premises infrastructure (greater control and privacy, lower latency and fewer network bottlenecks) with the best of public cloud services (compute elasticity). To deliver on this vision, the on-premises HCI solution must not only scale compute and storage resources independently; it must also take the next step of integrating, and maximizing utilization of, fast but expensive solid-state drive (SSD) storage media and the non-volatile memory express (NVMe) access protocol. This flies in the face of the HCI 1.0 approach of using COTS hardware and scaling by “just adding a node,” which may force the enterprise to significantly over-buy infrastructure resources, including compute, storage and networking. Designing the architecture to maximize CPU cores per socket, for instance, can deliver more raw performance to the application without forcing the enterprise to over-buy storage capacity it might not need.
To serve as the basis of a hybrid cloud architecture, an HCI 2.0 solution must also enable the growing number of cloud-native applications to run locally, so that these applications can access higher Input/Output Operations Per Second (IOPS) performance and lower latency when needed. Support for containerized applications should also be a consideration for IT professionals, as more cloud workloads are being written to take advantage of the more granular security control, enhanced mobility and increased agility afforded by containers.