Is Hyperconverged worth the Hype?

Hyperconvergence is capturing the attention of IT professionals. The apparent simplicity of the technology appeals to IT staffs that are often stretched too thin to manage their environments properly and, as a result, respond to requests for more IT resources by haphazardly adding hardware. Hyperconvergence promises to change all that: each node added to the cluster incrementally increases the compute, networking and storage capabilities of the environment. For an overworked IT team the approach may seem ideal, but hyperconvergence is not without its issues. IT needs to evaluate both the good and the bad of hyperconvergence to see whether the technology will meet the demands of the organization.

The advantages of hyperconvergence have been, well, hyped by vendors and the press. Simplicity is a key theme of the hyperconverged pitch. As described earlier, the environment scales incrementally as IT adds nodes to the hyperconverged cluster. Assuming those nodes are added in response to a demand for more compute resources, storage capacity or storage performance, each node addition should, in theory, solve the problem.

In addition to simplicity, hyperconverged solutions may also be less expensive. In most cases they use internal server-class storage instead of the enterprise-class drives that shared storage systems use. The hyperconverged solutions then either replicate data between nodes or aggregate the internal storage of each node into a virtual volume. To alleviate reliability concerns, hyperconverged storage software often compensates for these less expensive server-class drives by increasing the level of redundancy. Extra redundancy is good, but it adds to the cost of the solution and lowers its efficiency, as the example below illustrates.
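
To make that efficiency trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The node count, per-node capacity and replication factors are illustrative assumptions, not figures from any particular vendor; the point is simply that every extra copy kept for redundancy comes straight out of usable capacity.

```python
# Back-of-the-envelope estimate of usable capacity in a replication-based
# hyperconverged cluster. Node count, drive capacity and replication factors
# are illustrative assumptions, not vendor specifications.

def usable_capacity_tb(nodes, raw_tb_per_node, replication_factor):
    """Raw cluster capacity divided by the number of copies kept for redundancy."""
    return (nodes * raw_tb_per_node) / replication_factor

nodes = 4              # hypothetical cluster size
raw_tb_per_node = 10   # hypothetical raw capacity per node
raw_total = nodes * raw_tb_per_node

for rf in (2, 3):      # 2-way vs. 3-way replication
    usable = usable_capacity_tb(nodes, raw_tb_per_node, rf)
    print(f"{rf}-way replication: {usable:.1f} TB usable of {raw_total} TB raw "
          f"({100 / rf:.0f}% efficient)")
```

With these assumed numbers, moving from 2-way to 3-way replication drops usable capacity from half of the raw total to a third, which is the efficiency cost the paragraph above refers to.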

The Three Disadvantages of Hyperconverged Architectures

While the advantages of hyperconverged solutions are impressive, no single solution is suitable for every situation. The first downside is the inability to address a performance requirement granularly. Most data centers have a specific application that must receive a guaranteed level of performance, and of the three resources in question storage I/O is typically the biggest concern. The storage software has to compete with hosted virtual machines and other processes for CPU cycles, so the performance potential of storage I/O may fluctuate considerably. In the aggregated model, IT professionals also need to account for the inherent latency of a cluster, especially as that cluster scales.

In situations that require specific storage performance, it is preferable to have a dedicated shared storage system. IT can isolate volumes, and both IP and FC storage networks can provide some level of end-to-end quality of service.

The second downside is the way the hyperconverged architecture scales. Again, as you need more resources, you add more nodes. But when a data center needs more of something (compute, storage, networking), it rarely needs all three at the same time. It typically needs just one, and often needs that one far more often than the others. Which resource that is varies from organization to organization, but in most cases the demand for one outpaces the other two. The result is that as the hyperconverged cluster scales it becomes out of balance. For example, if the primary motivation for expansion is storage capacity, the cluster ends up with extra compute resources that go to waste.
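
As a rough illustration of that imbalance, the following sketch models a cluster that is expanded purely to satisfy storage-capacity growth and reports how much compute ends up idle. The per-node figures and demand numbers are hypothetical, not any real product's specifications.

```python
# Sketch of the scaling imbalance: a cluster expanded purely for storage
# capacity over-provisions compute. All per-node figures and demand numbers
# are hypothetical.
import math

NODE_TB = 10      # assumed usable storage per node
NODE_CORES = 24   # assumed CPU cores per node

def nodes_for_capacity(required_tb):
    """Smallest node count that satisfies the capacity requirement."""
    return math.ceil(required_tb / NODE_TB)

required_tb = 200   # storage-driven growth target
cores_in_use = 96   # compute demand has not grown

nodes = nodes_for_capacity(required_tb)
total_cores = nodes * NODE_CORES
print(f"{nodes} nodes needed for {required_tb} TB of capacity")
print(f"{total_cores - cores_in_use} of {total_cores} cores sit idle")
```

Under these assumptions the capacity target forces twenty nodes into the cluster, and the large majority of the CPU cores they bring along sit unused.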

Once again, if an organization knows that its data center will scale one particular component of the resource trifecta, it should take another look at a more traditional architecture that can add compute, storage capacity, storage performance and network bandwidth independently. The apparent complexity a multi-tier architecture might add is often overstated, as many storage software solutions can automate the allocation of these resources.

The third downside is simply one of vendor lock-in. Many hyperconverged systems are sold as turnkey appliances, and additional nodes are available only from that vendor. Another challenge with this lock-in is that these solutions become their own independent silos, often unable to leverage the existing servers and storage systems in the environment.

While there are software-only hyperconverged solutions, they introduce a different type of complexity. With these solutions, IT becomes the evaluator and integrator of both the hardware and the software for all three tiers.

Software defined storage can strike a balance. It allows the organization to leverage its current assets and expand with more cost-effective tier-two systems. Because storage remains a separate tier with different classes of storage, expansion to meet business demands can be very granular. Further, many of these solutions can automate the movement of data between classes of storage.
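
Many of those automated data-movement policies boil down to demoting data that has not been accessed for some period to a cheaper class of storage. The sketch below shows the basic idea; the tier names, data set names and the 30-day threshold are hypothetical, not the behavior of any specific product.

```python
# Minimal sketch of an age-based tiering policy for software defined storage.
# The tier names and the 30-day threshold are illustrative assumptions.
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=30)   # demote data untouched for 30 days

def choose_tier(last_access, now):
    """Return the class of storage a data set should live on."""
    if now - last_access > COLD_AFTER:
        return "tier 2 (capacity)"
    return "tier 1 (performance)"

now = datetime(2016, 3, 30)
data_sets = {
    "vm-boot-volume": datetime(2016, 3, 29),
    "old-project-share": datetime(2015, 12, 1),
}
for name, last_access in data_sets.items():
    print(f"{name}: {choose_tier(last_access, now)}")
```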

StorageSwiss Take

There is no one-size-fits-all technology solution that will work for every data center. Even within a data center, there are bound to be multiple, conflicting quality-of-service demands. Hyperconverged may be ideal for data centers where the aggregate performance of the hyperconverged cluster is more than adequate for all workloads, so that all service level agreements can be met without fine-tuning the environment for a specific use case. For organizations that need specific guarantees and have vendor lock-in concerns, traditional three-tier architectures may still be their best option.

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

4 comments on "Is Hyperconverged worth the Hype?"
  1. […] that camp and George Crump, an analyst for Storage Switzerland, has a great write-up here – Is Hyperconverged worth the Hype? – on the pros and cons to that market.  I think a key takeaway is that if you’re looking […]

  2. mletschin says:

    Nice comparison for the Pros and Cons. I was just posting a similar article around how Nexenta views the solution. http://blog.nexenta.com/2016/03/23/questions-from-the-field-hyperconvergence/

  3. Very good points, George. Not to make this a company/product promotion, but Maxta http://www.maxta.com totally agrees – those are the cons of other hyperconverged solutions, but not Maxta. We built into our core the ability to vary block size and storage policies per VM, our solution allows customers to add compute or storage independently (including increasing storage capacity in existing deployments), and we don't sell an appliance; it is software only, running across all standard Intel solutions. We also allow our customers to transfer their license to new hardware at no cost if they wish to upgrade to the latest Intel architecture, avoiding vendor lock-in.

    Thanks for pointing these issues out; Maxta has a solution that addresses them.

  4. Alan Conboy says:

    Interesting take on the landscape, George, but it does have one fundamental flaw. When you talk about scaling, the assumption in the article is that all hyperconverged nodes from a given vendor are identical. While that may have been the case in 2011/2012, it is not so in today's world. For example, here at Scale Computing we offer a range of nodes covering everything from the very basic to the CPU-heavy to the storage-only, etc. This means that while you can add conventional nodes, you can also add storage-only nodes, add CPU and RAM to existing nodes, field-upgrade to 10 gig ether, and mix and match nodes of very different resource profiles and hardware types. When an HCI appliance is done properly, it very much becomes a case of Lego bricks meeting the datacenter – i.e. adding just the type of resource that you need, exactly when you need it, without having to over-buy on the front end. Just this one piece of the equation can radically improve efficiency across the board from a TCO perspective.
