Hyperconvergence – Scale-Up vs. Scale-Out

One of the attractions of a hyperconverged architecture is that when the organization needs more compute or storage resources, it simply adds another node. But hyperconverged vendors seldom talk about the downsides of this approach. There are times when it is better to scale up instead of out. The problem is that most hyperconverged solutions make it very difficult, or at least very expensive, to scale up.

Scaling-Up vs. Scaling-Out

The concept of scaling up draws a lot of criticism in IT vendor circles because of the forklift nature of the upgrade. If scale-up is the only option, the organization needs to buy a whole new server or storage system and replace the old one after migrating applications and/or data to the new unit.

Scale-out, on the other hand, means adding a node to the existing cluster, and storage and compute expand automatically. New applications and new data can start leveraging the increased resources, or the system automatically shifts certain data sets to the new node to better balance the cluster.
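
As a rough illustration of that rebalancing idea, here is a minimal Python sketch. It is not any vendor’s actual algorithm; it simply shows data shifting from the fullest nodes onto a newly added empty node until every node sits at the same utilization.

```python
# A minimal sketch (not any vendor's algorithm) of scale-out rebalancing:
# when an empty node joins, shift data until all nodes share the same
# utilization ratio.

def rebalance(used_tb, capacity_tb):
    """Return per-node used TB after leveling to the cluster-wide average."""
    target = sum(used_tb) / sum(capacity_tb)      # desired fill ratio
    return [cap * target for cap in capacity_tb]

# Three existing 20 TB nodes at ~80% full, plus one new, empty 20 TB node.
used = [16, 17, 15, 0]
capacity = [20, 20, 20, 20]
print(rebalance(used, capacity))  # -> [12.0, 12.0, 12.0, 12.0]
```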

The problem is that scale-out isn’t a perfect approach either. Nodes added to the environment typically come with both compute and storage, even if the need is for only one of those resources. Some vendors address this by allowing mixed node types: some compute-heavy, others storage-heavy. But in either case, the new node still takes up data center floor space, requires power and cooling, and consumes additional network ports.
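
To make the stranded-resource point concrete, here is a trivial sketch using hypothetical node specifications (the numbers are illustrative, not from any vendor’s datasheet):

```python
# Hypothetical numbers: scaling out for storage alone also buys compute.
node_added = {"cores": 32, "storage_tb": 20}  # assumed standard node profile
shortfall = {"cores": 0, "storage_tb": 20}    # the cluster only needs storage

# Resources purchased but not needed (stranded):
stranded = {k: node_added[k] - shortfall[k] for k in node_added}
print(stranded)  # -> {'cores': 32, 'storage_tb': 0}
```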

The Advantages of Scaling-Up

There are times when scaling up actually makes more sense. CPU power will continue to increase, as will the number of cores per CPU. Storage IO connections will also continue to improve and deliver lower latency, as we are seeing right now with NVMe. What this continual improvement means is that today’s server (node) can support over three times the number of virtual machines that last year’s server technology could.
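
As a back-of-the-envelope illustration of that density claim, with assumed (not benchmarked) per-generation numbers:

```python
# Assumed, illustrative figures only; real VM density depends on workload.
old_cores, old_vms_per_core = 16, 3.0   # last generation's node
new_cores, new_vms_per_core = 32, 4.5   # more cores, faster cores, NVMe IO

old_vms = old_cores * old_vms_per_core  # 48 VMs
new_vms = new_cores * new_vms_per_core  # 144 VMs
print(f"{new_vms / old_vms:.1f}x density")  # -> 3.0x
```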

Considering most data centers are space-constrained, doing more in less space has big appeal. The problem is that most hyperconverged architectures don’t make the migration to new hardware cost-effective.

The Hyperconverged Scale-Up Problem

Every hyperconverged system expands by adding new nodes to the existing cluster, and most support decommissioning old nodes. As a result, they can scale up over time by adding new nodes and retiring old ones. The problem is that most hyperconverged solutions are bundled affairs, with the vendor as the sole source for both the hardware and the software.

The first challenge with this bundled approach is that the customer has to wait for the vendor to support the latest server hardware, and the vendor’s timing may not match Intel’s. Plenty of server manufacturers support the latest CPUs and internal networking architectures the moment Intel ships its latest iteration, but the customer of a bundled hyperconverged vendor has to wait until that vendor refreshes its product line, which may take more than a year.

The second challenge with the bundled approach is the way these systems are packaged. Because the software and hardware are tied together, the software license does not transfer to new equipment. A customer in this position is essentially scaling up; all they need is new hardware, not new software. Yet they are effectively buying the software again even though they already have a copy. This double charge is like having to re-buy Microsoft Office when a new laptop is purchased, instead of just downloading it and applying the existing license. But the bundled vendor locks the two together.

The Scale-Up Hyperconverged Solution

The solution is to make the software license transferable to the new node. Transferability is a key advantage of software-first hyperconverged solutions. These vendors provide the hyperconverged solution either as software-only or as a turnkey system with hardware, and the two packaging arrangements can be intermixed.

For example, a new customer can start with a turnkey system that includes pre-installed and configured software. They can expand the cluster by adding new nodes as needed. After a few years, when they want to upgrade the cluster instead of expanding it, they can purchase just the hardware they need and transfer the software license from the system being decommissioned. And since the solution is software-focused, they can use the server hardware of their choice, including vendors that are faster to adopt new technologies.

StorageSwiss Take

The ability to scale up a hyperconverged cluster in addition to scaling it out is an important advantage of software hyperconverged solutions. To learn the other advantages, check out our on-demand webinar, “Showdown: Hardware-Based Hyperconvergence vs. Hyperconvergence Software”.
