Hyperconvergence is capturing the attention of IT professionals. The apparent simplicity of the technology appeals to IT staffs that are often stretched too thin to properly manage their environments. As a result, those staffs frequently respond to requests for more IT resources by haphazardly adding hardware. Hyperconvergence promises to change all that: each node incrementally increases the compute, networking and storage capabilities of the environment. For an overworked IT team the hyperconverged approach probably seems ideal, but hyperconvergence is not without its issues. IT needs to evaluate both the good and the bad of hyperconvergence to see whether the technology will meet the demands of the organization.
The advantages of hyperconvergence have been, well, hyped by vendors and the press. Simplicity is a key theme of a hyperconverged solution. As described earlier, the environment scales incrementally as IT adds nodes to the hyperconverged cluster. Assuming the addition of those nodes is a response to a demand for more compute resources, storage capacity or storage performance, each node addition should, in theory, solve the problem.
In addition to simplicity, hyperconverged solutions may also be less expensive. In most cases they use internal server-class storage instead of the enterprise-class drives that shared storage systems use. The hyperconverged solutions then either replicate data between nodes or aggregate the internal storage of each node into a virtual volume. To alleviate reliability concerns, hyperconverged storage software often compensates for these less expensive server-class drives by increasing the level of redundancy. Extra redundancy is good, but it adds to the cost of the solution and lowers its efficiency.
The Three Disadvantages of Hyperconverged Architectures
While the advantages of hyperconverged solutions are impressive, no single solution suits every situation. The first downside is the inability to address a performance requirement granularly. Most data centers have a specific application that must receive a certain level of performance. Of the three resources in question, storage I/O is typically the biggest concern. The storage software has to compete with hosted virtual machines and other processes for CPU cycles, so the performance potential of storage I/O may fluctuate considerably. In the aggregated model, IT professionals also need to account for the inherent latency of a cluster, especially as that cluster scales.
In situations that require specific storage performance, a dedicated shared storage system is preferable. IT can isolate volumes, and both IP and FC storage networks can provide some level of end-to-end quality of service.
The second downside is the way a hyperconverged architecture scales. Again, as you need more resources, you add more nodes. But when a data center needs more of something (compute, storage, networking), it rarely needs all three at the same time. Typically it needs just one, and it needs that one resource far more often than it needs the others. Which resource that is will vary from organization to organization, but in most cases one outpaces the other two. The result is that as the hyperconverged cluster scales, it falls out of balance. For example, if the primary motivation for expansion is storage capacity, the cluster ends up with extra compute resources that go to waste.
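The imbalance is easy to quantify in a simple sketch. The per-node specs and demand figures below are assumptions chosen purely for illustration, not measurements from a real cluster:

```python
# Hypothetical sketch of how a uniform-node cluster drifts out of balance
# when only one resource (here, storage capacity) is actually growing.

NODE = {"cores": 32, "tb": 20}  # every node adds both resources in lockstep

def nodes_needed(demand, per_node):
    # Round up: you can only buy whole nodes.
    return -(-demand // per_node)

cpu_demand_cores = 64    # compute need stays flat
storage_demand_tb = 200  # storage need keeps growing

# The cluster must be large enough for the most demanding resource.
nodes = max(nodes_needed(cpu_demand_cores, NODE["cores"]),
            nodes_needed(storage_demand_tb, NODE["tb"]))
idle_cores = nodes * NODE["cores"] - cpu_demand_cores
print(f"{nodes} nodes to satisfy storage demand; {idle_cores} cores sit idle")
```

Under these assumed numbers, satisfying the storage requirement forces ten nodes into the cluster while only two nodes' worth of compute is actually needed, leaving the rest of the cores paid for but unused.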
Once again, if an organization knows that its data center will scale one particular component of the resource trifecta, it should take another look at a more traditional architecture, which can add compute, storage capacity, storage performance and network bandwidth independently. The complexity a multi-tier architecture appears to add is often overstated, as many storage software solutions can automate the allocation of these resources.
The third downside is simply vendor lock-in. Hyperconverged systems are often sold as turnkey appliances, and additional nodes are available only from the original vendor. Another aspect of this lock-in is that these solutions are their own independent silos, often unable to leverage the existing servers and storage systems in the environment.
While there are software-only hyperconverged solutions, they introduce a different type of complexity: IT becomes the evaluator and integrator of both the hardware and the software for all three tiers.
Software-defined storage can strike a balance. It allows the organization to leverage its current assets and to expand with more cost-effective tier-two systems. Because storage is a separate tier with different classes of storage, expansion to meet business demands can be very granular. Further, many of these solutions can automate the movement of data between storage classes.
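The automated data movement described above usually boils down to a placement policy. A minimal sketch, assuming a simple age-based rule (the threshold, tier names and volume records are hypothetical, not any vendor's actual behavior):

```python
# Hypothetical sketch of a tiering policy: demote data that has not been
# accessed recently to a cheaper storage class. The 30-day threshold and
# the "tier-1"/"tier-2" names are assumptions for illustration only.

COLD_AFTER_DAYS = 30

def place(volume):
    """Return the storage class a volume should live on."""
    if volume["days_since_access"] > COLD_AFTER_DAYS:
        return "tier-2"  # cost-effective capacity tier
    return "tier-1"      # performance tier

volumes = [
    {"name": "erp-db",  "days_since_access": 0},
    {"name": "archive", "days_since_access": 90},
]
for v in volumes:
    print(v["name"], "->", place(v))
```

Real products layer scheduling, hysteresis and per-application policies on top of this idea, but the core mechanism is a rule that maps data to the cheapest class of storage that still meets its service level.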
There is no one-size-fits-all technology solution that will work for every data center. Even within a single data center, there are bound to be multiple, conflicting quality-of-service demands. Hyperconvergence may be ideal for data centers where the aggregate performance of the hyperconverged cluster is more than adequate for all workloads, so that all service level agreements can be met without fine-tuning the environment for a specific use case. For organizations that need specific guarantees and have vendor lock-in concerns, a traditional three-tier architecture may still be the best option.