The volume and business significance of storage-intensive workloads (e.g., high-volume analytics) are rapidly growing. As a result, cumbersome and expensive legacy storage infrastructures are weighing heavily not only on businesses' bottom lines, but also on their ability to compete effectively.
In their quest to radically simplify and reduce the cost structure of their storage environments, many IT shops are turning to hyperconverged infrastructure (HCI). HCI is a software-defined architecture that abstracts the storage area network (SAN) into the server hypervisor and can be deployed on commodity hardware. Effectively, HCI offers a path to simpler-to-manage, software-defined storage that bypasses the complexity of architecting and deploying a stand-alone software-defined storage architecture.
IT buyers should plan carefully, however, to ensure that this simplicity does not come at a price. Storage Switzerland defines the approach of deploying HCI software on commodity hardware as "HCI 1.0." As HCI begins to serve applications with intensive storage capacity and IO requirements, however, HCI 1.0 cannot deliver the independent scaling of compute and storage, or the levels of resource utilization, that these workloads require.
With the building block approach of HCI 1.0 solutions, if a customer requires more storage capacity, they must purchase an entire additional node; as a result, they are purchasing additional resources, including CPU, software licenses, and networking, that their workload does not actually need. The HCI 1.0 system ends up dedicating far more processing power to the storage than the workload requires.
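To make this over-provisioning concrete, a rough sketch is below. All node specs and prices are hypothetical round numbers for illustration only, not figures from any vendor or from Storage Switzerland:

```python
# Hypothetical illustration of HCI 1.0 node-based scaling overhead.
# Every figure below is an assumed example, not vendor data.

NODE = {
    "cpu_cores": 32,         # cores per node (assumed)
    "storage_tb": 20,        # usable storage per node (assumed)
    "license_cost": 10_000,  # per-node software license (assumed)
    "hardware_cost": 25_000, # per-node hardware (assumed)
}

def nodes_needed(storage_required_tb):
    """Capacity can only be added one whole node at a time."""
    return -(-storage_required_tb // NODE["storage_tb"])  # ceiling division

def scaling_summary(storage_required_tb, cores_required):
    """Return node count, excess CPU cores, and total node cost."""
    n = nodes_needed(storage_required_tb)
    excess_cores = n * NODE["cpu_cores"] - cores_required
    total_cost = n * (NODE["license_cost"] + NODE["hardware_cost"])
    return n, excess_cores, total_cost

# A storage-heavy workload: 100 TB of capacity but only 48 cores of compute.
n, excess, cost = scaling_summary(100, 48)
print(f"{n} nodes, {excess} excess cores, ${cost:,} total")
# -> 5 nodes, 112 excess cores, $175,000 total
```

Under these assumptions, more than two-thirds of the purchased CPU cores (and the licenses tied to them) sit idle, which is the cost-structure problem the building-block model creates for storage-intensive workloads.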
To obtain the benefits of simplicity that HCI offers while keeping the total cost of ownership (TCO) in check, an optimized underlying hardware infrastructure is required. Furthermore, maximizing utilization of underlying hardware resources has the additional benefit of narrowing the performance gap between running these workloads on a virtualized versus a bare metal infrastructure.
IT planners should closely evaluate the bandwidth being allocated to back-end storage media (as opposed to front-end expansion cards, etc.). Especially in an all-flash environment, maximizing solid state drives (SSDs) per PCIe lane can help to provide sustained, increased I/O bandwidth and to reduce I/O latency – both critical to serving demanding, data-driven workloads. Furthermore, usable storage capacity increases. Increasing drive density also enables smaller server clusters, which helps to increase CPU utilization and to reduce software licensing costs.
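A back-of-the-envelope bandwidth budget shows why lane allocation matters. The per-lane throughput is an approximate round number (PCIe 3.0 delivers roughly 1 GB/s of usable bandwidth per lane), and the drive counts are assumed for illustration:

```python
# Rough PCIe bandwidth budget for back-end SSDs.
# ~1 GB/s usable per PCIe 3.0 lane is an approximation, not a spec quote.

PCIE3_GBPS_PER_LANE = 1.0  # approximate usable GB/s per PCIe 3.0 lane

def backend_bandwidth_gbps(ssd_count, lanes_per_ssd):
    """Aggregate back-end bandwidth if every SSD gets dedicated lanes."""
    return ssd_count * lanes_per_ssd * PCIE3_GBPS_PER_LANE

# Example: 24 NVMe SSDs at x4 each consume 96 lanes and could sustain
# roughly 96 GB/s -- more lanes than many servers expose once front-end
# expansion cards (NICs, GPUs, etc.) take their share.
print(backend_bandwidth_gbps(24, 4))
```

The point of the sketch is the trade-off it exposes: every lane spent on a front-end card is a lane unavailable to back-end flash, so a platform optimized for storage-intensive workloads should budget lanes toward the drives.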
The modern, data-driven workload set requires the utmost simplicity, but optimizing throughput, latency, and storage capacity is equally important to controlling costs. This is not an easy feat for IT planners. For further discussion of how to get more out of your hyperconverged infrastructure in serving these requirements, access Storage Switzerland's webinar in conjunction with Axellio, How to Put an End to Hyperconverged Silos.