Hyperconverged 101 – Understanding the Components of HCI – Part 2 – Storage Software

In the last blog Storage Switzerland discussed the heart and soul of a hyperconverged infrastructure (HCI), the hypervisor. But a group of servers clustered via a hypervisor is not HCI; it is merely virtualization. For any virtualized environment to deliver its full potential, it needs shared storage of some sort. The shared storage component presents a challenge, and for HCI, an opportunity to displace dedicated shared storage systems. However, HCI must still simulate a shared storage infrastructure so that key capabilities like live virtual machine migration can occur. Storage software must be added to the hypervisor to fulfill that need and create HCI.

The Storage Software Behind HCI

In HCI, storage services are delivered by adding storage software to the hypervisor. The storage software can run as a virtual machine alongside other virtual machines, or it can be integrated into the hypervisor itself. It is essentially software-defined storage (SDS) but with one important addition: the software manages storage IO across every node within the virtualization cluster instead of managing a dedicated storage device. Beyond that cluster-wide data management, HCI storage software provides services similar to other SDS solutions, such as snapshots, cloning and data protection.
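One of the SDS services mentioned above, the snapshot, is worth illustrating. The sketch below is purely hypothetical (the `Volume` class and its methods are illustrative names, not any product's API); real HCI software operates on disk blocks, but the principle is the same: a copy-on-write snapshot references the current state cheaply, and only subsequent writes diverge from it.

```python
# Hypothetical sketch of a copy-on-write snapshot, one of the SDS services
# HCI storage software typically provides. Illustrative only.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block address -> data
        self.snapshots = []

    def snapshot(self):
        # A snapshot is a cheap reference to the current block map;
        # no user data is duplicated at snapshot time.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, addr, data):
        # New writes go only to the live map; the snapshot keeps the
        # old version of any block written after the snapshot was taken.
        self.blocks[addr] = data

vol = Volume({0: "aaaa", 1: "bbbb"})
snap = vol.snapshot()
vol.write(1, "BBBB")
print(snap[1], vol.blocks[1])  # snapshot preserves "bbbb"; live volume holds "BBBB"
```

Cloning works on the same principle: a clone starts as a snapshot that is then made writable in its own right.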

Understanding Data Distribution

It is important to understand how the HCI storage software delivers data protection and shared data access. Today, there are essentially two methods. The first is replication, which copies a VM's data store as data within it is changed or added. The copies go to a predefined number of destination nodes.

If an IT administrator needs to migrate a VM, they can move it to another node. The preference is to move the VM to a physical server that is one of its replication targets. If the administrator for some reason needs to move the VM to a physical server without direct access to the data store, then the software should enable remote access to the data. Replication has the advantage of being relatively light on network requirements, since most data IO is served from directly attached storage. The downside is that it makes full copies of the data on each of the target nodes, typically a minimum of three, which consumes disk capacity.
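The capacity cost and placement preference described above can be sketched in a few lines. This is a simplified model under stated assumptions: a replication factor of three (the typical minimum noted above), and the function and node names are illustrative, not taken from any specific HCI product.

```python
# Simplified model of replication in an HCI cluster: full copies on a fixed
# number of nodes, and a migration preference for nodes that hold a copy.
import random

REPLICATION_FACTOR = 3  # full copies kept across the cluster (typical minimum)

def place_replicas(nodes, factor=REPLICATION_FACTOR):
    """Pick `factor` distinct nodes to each hold a full copy of the VM's data store."""
    return random.sample(nodes, factor)

def capacity_consumed(vm_size_gb, factor=REPLICATION_FACTOR):
    """Raw capacity used cluster-wide: one full copy per target node."""
    return vm_size_gb * factor

nodes = ["node-a", "node-b", "node-c", "node-d"]
targets = place_replicas(nodes)

# Migrating the VM to one of its replica targets keeps read IO on local disk;
# moving it to any other node means reaching the data store over the network.
preferred_destinations = targets

print(capacity_consumed(100))  # a 100 GB VM consumes 300 GB of raw capacity
```

The last line is the downside the text describes: three full copies triple the raw capacity a single VM consumes.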

The other data protection and shared data access method is erasure coding, which stripes data across the nodes in the cluster. It is similar to RAID 5 and 6 in that it creates parity data used to reassemble the data if one of the nodes fails. The advantage of erasure coding is that it exacts only about a 25 – 30% capacity overhead, compared to replication's 100 to 300% increase. The downside is that erasure coding requires more network resources, since every IO involves the network, as well as CPU resources to handle the parity calculations. Also, VMs do not benefit from direct access to local storage for read IO as they do in a replication model.
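The overhead comparison above is simple arithmetic, sketched below. The 4+1 stripe geometry is an illustrative assumption (it yields the 25% figure cited above); actual data/parity ratios vary by product and configuration.

```python
# Raw-capacity overhead of replication vs. erasure coding, expressed as a
# fraction of the usable data (e.g. 2.0 means 200% extra capacity consumed).

def replication_overhead(copies):
    """Extra raw capacity beyond one usable copy of the data."""
    return copies - 1

def erasure_coding_overhead(data_fragments, parity_fragments):
    """Parity capacity relative to data capacity."""
    return parity_fragments / data_fragments

print(replication_overhead(3))        # 3 full copies  -> 200% overhead
print(erasure_coding_overhead(4, 1))  # 4 data + 1 parity (RAID-5-like) -> 25%
print(erasure_coding_overhead(4, 2))  # 4 data + 2 parity (RAID-6-like) -> 50%
```

Note the trade-off the text describes: a 4+2 scheme survives two node failures (like RAID 6) at the cost of doubling the parity overhead of 4+1.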

How is the Storage Software Delivered?

Another important question for IT planners to consider is whether they should use the storage software from the hypervisor vendor or choose a stand-alone SDS solution that runs in an HCI configuration. Each of the major hypervisor vendors has its own storage software option: VMware provides vSAN and Microsoft provides Windows Storage Spaces Direct. Nutanix provides its software as an alternative to both, but it also has its own hypervisor, Nutanix Acropolis, which integrates more tightly with its software.

There is also the option of adding third-party storage software to the hypervisor. SDS solutions are available for most major hypervisors. Companies in this space are almost too numerous to mention but include DataCore and StorONE.

A final option is a completely turnkey solution where the vendor supplies the hardware and the software. Most of these use off-the-shelf software components and pre-bundle them with hardware. They may include their own storage software or make specific optimizations to the hypervisor. Examples of these solutions include Pivot3, Scale Computing and Nutanix. Also, almost every major server vendor has a relationship with both Microsoft and VMware to bundle a version of their hypervisor and storage software with their server hardware.

Which is Best?

The challenge for IT planners is deciding which option is best for them. The SDS market is volatile, so partnering with an SDS vendor does carry a degree of risk, but that risk is often offset by more advanced features, better performance or better pricing. Using storage software from the hypervisor vendor is potentially the safest route. The big three hypervisor vendors (VMware, Microsoft and Red Hat) are likely safe bets for long-term viability. These companies have also vastly improved the capabilities and performance of their software offerings over the years, catching up with most, but not all, of the SDS vendors. Turnkey vendors typically preconfigure one of the above options on their server hardware. While this approach simplifies installation and potentially support, it is not really a technology decision.

The best choice really depends on the organization. If the organization needs better performance and is comfortable with the risks of working with a younger company, then stand-alone SDS solutions provide a lot of flexibility. For companies that don't need that performance or that are more risk averse, the built-in solutions are very appealing, especially considering recent updates.

There is still another factor to consider that may sway the decision: which server node is used? In the past, the server node was not a major consideration since most nodes were relatively basic white-box servers. Recently, however, new options have entered the market that offer premium hardware optimized for the hyperconverged use case. These solutions can result in lower node counts, better performance and reduced costs. Selecting the right server hardware is the focus of the next blog in this series.

In the meantime, check out our on-demand webinar, “How to Put an End to Hyper-Converged Silos.” As part of the webinar you’ll also get an exclusive copy of our eBook “What is HCI 2.0”.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
