The hyperconverged infrastructure (HCI) market has entered a new phase of maturity, as evidenced by recent technology developments and by vendor consolidation. For IT professionals, this maturity has two significant takeaways. From a technology perspective, HCI brings a compelling value proposition to a range of workloads, from mission-critical, traditional mainstays such as virtual desktop infrastructure hosting to more modern edge computing, IoT data processing, and artificial intelligence workloads. From a vendor perspective, the market has largely consolidated around industry heavyweights that existed before the advent of HCI. However, there are some exceptions, especially for IT professionals looking for targeted infrastructure, such as for a remote or branch office.
What is HCI, and why does it matter for my workloads?
HCI consolidates server and storage virtualization onto the same server node. With the uptick during the past two to three years in alliance activity between HCI software and software-defined networking vendors, HCI also increasingly unifies management of software-defined compute (virtual machines), storage and networking. HCI’s core value proposition to date has centered on simplification (centralized management and streamlined deployment through pre-integrated appliances) as well as cost efficiency (fewer storage management headaches and lower hardware price tags through the use of industry-standard rather than proprietary hardware).
Organizations of varying sizes and industries have found value in initial HCI deployments, which typically centered on point applications, test and development workloads, and virtual desktop infrastructure (VDI) hosting. Vendors are responding with a next generation of HCI solutions better suited to handling mixed workloads, as well as modern workloads such as hybrid cloud service hosting and big data analytics processing at the edge (more on how Storage Switzerland defines “HCI 2.0” can be found in this recent blog).
Where the first generation of HCI largely treated underlying hardware as an inconsequential commodity, HCI 2.0 solutions are designed to better optimize that hardware, for instance through independent scaling of compute and storage resources and increased utilization of CPU, memory and storage capacity. Hardware optimization better positions enterprise IT organizations to maximize performance and capacity without overprovisioning, and to pool infrastructure resources. Both are critical capabilities for serving demanding, Tier One applications cost effectively, and for bringing the elasticity, responsiveness and multi-tenancy inherent in public cloud services to on-premises workloads. Hardware optimization also stands to deliver higher availability and more predictable performance to these mission-critical workloads.
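The overprovisioning point can be made concrete with a toy capacity-planning calculation. All numbers below are hypothetical: when compute and storage can only scale together (first-generation, appliance-style HCI), the scarcer resource dictates how many nodes must be purchased, leaving the other resource idle; independent scaling avoids buying the unneeded resource.

```python
import math

# Hypothetical per-node resources and a storage-heavy workload demand.
NODE_CORES, NODE_TB = 32, 20     # one appliance node
NEED_CORES, NEED_TB = 64, 400    # workload requirements

# Coupled scaling: every added node brings both cores and capacity, so the
# scarcer resource (storage here) dictates the node count and cores sit idle.
coupled_nodes = max(math.ceil(NEED_CORES / NODE_CORES),
                    math.ceil(NEED_TB / NODE_TB))
idle_cores = coupled_nodes * NODE_CORES - NEED_CORES

# Independent scaling: compute nodes and storage nodes are added separately,
# so each resource is sized to its own demand.
compute_nodes = math.ceil(NEED_CORES / NODE_CORES)
storage_nodes = math.ceil(NEED_TB / NODE_TB)

print(f"coupled: {coupled_nodes} nodes, {idle_cores} idle cores")
print(f"independent: {compute_nodes} compute + {storage_nodes} storage nodes")
```

With these illustrative figures, coupled scaling forces 20 full nodes (576 idle cores) to satisfy the storage requirement, while independent scaling needs only 2 compute nodes plus 20 storage nodes, with no stranded compute.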
Complementing enhanced capabilities for private cloud service hosting, many vendors have also integrated support for public cloud services (for instance, auto-tiering between on-premises and off-premises storage resources for more optimal data placement) and for container management. Especially as IT consolidates a greater variety of workloads onto HCI, the ability to optimize data and application placement while retaining as much centralized visibility and control as possible will go far. HCI will play a role in future data centers that have abstracted, composable, heterogeneous resources to address application-specific needs.
In addition to better serving hybrid cloud workload consolidation, HCI 2.0 is better suited to data-intensive workloads, including artificial intelligence and high-volume analytics at the edge. The simplicity of HCI makes it attractive for edge, remote and branch office implementations, where it offers a clear value proposition for capturing and analyzing high-velocity data. With more and more data generated and processed outside of the traditional data center, this represents a significant new opportunity for HCI. Key for IT professionals to bear in mind is that these data-intensive workloads require new levels of processing power, memory and IOPS. HCI vendors are addressing these needs by introducing higher-memory CPUs, GPUs, faster connectivity and fast-performing NVMe media into their solutions. IT planners should closely evaluate a solution’s ability to optimize these investments from an architecture perspective, for example by maximizing throughput and drives per server node. Storage Switzerland recently published additional analysis on how HCI 2.0 can serve edge environments and data-intensive workloads.
Market Maturity Results in Software Vendor Consolidation
From a vendor perspective, the HCI landscape has always been a complex and splintered mixture of hardware and software vendors. IT professionals have a slew of stand-alone software platforms, reference architectures and pre-configured solutions to choose from.
A sign of the market’s maturity is consolidation of the vendor landscape over the past couple of years, most recently with software vendor Maxta closing its doors in late January 2019. Maxta’s shutdown, alongside the ongoing success of VMware and Microsoft in the HCI market, reflects the challenges inherent in disrupting and displacing well-entrenched server virtualization platforms. The startups that remain in business and have been most successful have, by and large, carved out a niche rather than attacking the market as a whole.
For instance, StarWind, Scale Computing and StorMagic cater to small-to-midsized business (SMB) and remote office/branch office (ROBO) environments; their unique ability to create highly available two-node clusters makes them ideal for these deployments. Meanwhile, Datrium differentiates around disaster recovery and a hybrid hyperconverged architecture that stores older data on a centralized data store. Pivot3 continues to enjoy substantial success in its core video surveillance wheelhouse. For its part, Axellio has gone its own route, creating hardware designed for high-performance data processing in an HCI architecture while leveraging leading SDS vendors’ software, including Microsoft’s Storage Spaces Direct, VMware’s vSAN and Nutanix’s Acropolis.
The exception to this generalization is Nutanix. Nutanix had the benefit of being early to market (it is widely accepted as the HCI pioneer) with a clear value proposition that translated broadly (cutting out the complexities of siloed legacy infrastructure). At the same time, its portfolio and messaging matured quickly, shifting toward more broadly facilitating enterprise cloud computing, and Nutanix ensured compatibility early on with leading server hardware and virtualization platforms. This provided solid footing for Nutanix to expand across the data center and into the edge and hybrid cloud, through targeted acquisitions (such as Minjar for multi-cloud cost and performance management), alliances (such as with SDN vendor Cumulus Networks) and R&D.
If HCI is accelerating the shift to “software-defined” infrastructure, where does this leave the major hardware vendors?
HCI is blurring the lines between hardware and software and opening new opportunities to compete across the infrastructure stack. As a result, we have seen Dell’s acquisition of EMC, Cisco’s foray beyond the network, and HPE’s push into software-defined storage and networking. While these vendors have invested in acquisitions (Dell/VMware, Cisco/Springpath and HPE/SimpliVity) to build their HCI businesses, others, including IBM and Lenovo, have focused more on alliances. Especially as HCI comes to support mission-critical applications, these vendors will continue to play an important role.