Hyperconverged Infrastructure (HCI) is supposed to make scaling easier. When the architecture needs to deliver more compute power, storage performance, or storage capacity, just add another node. The problem is that “just adding another node”, especially as the number of nodes grows, creates complexity from both a data protection and a network management standpoint.
A Challenging Start
HCI has a problem up-front. Although several HCI vendors now support two-node configurations for the remote office/branch office use case, most HCI vendors will agree that three nodes is the better starting point. Even if the potential for resource waste is overlooked, the three-node preference makes for a more complex setup. Each of these nodes needs to be installed, networked, loaded with the base hypervisor, and loaded with the storage (HCI) software. Then, the customer needs to verify that cluster communications are working and that the data protection scheme is doing what it is supposed to do.
For most HCI environments, all of these components are “net new”, despite the fact that most data centers already have compute, storage, and of course a network. Most HCI solutions cannot be installed on top of existing hardware. The “net new” factor means that this is a fresh install: rack space has to be cleared, and all the physical hardware needs to be mounted and connected. While a net new dedicated shared storage system would also require rack space, it would be able to use existing network connections, and the existing virtual server infrastructure would be able to connect to it.
While it is true that some HCI vendors will turnkey parts of the implementation, that turnkey option comes at a cost: the more turnkey the implementation, the more it is likely to cost. It is also important to remember that in most cases network connectivity remains the customer’s responsibility.
A Challenging Middle
Once the initial implementation is complete, the organization has to begin one of the more problematic types of migration: a migration between clusters with what are effectively different storage systems. While vMotion (i.e., not Storage vMotion) is designed to handle this transition, the network settings have to be correct, and the hope is that it won’t introduce downtime, troubleshooting confusion, or performance impact.
The first few virtual machines (VMs) on the new HCI platform will probably run flawlessly, but as more and more VMs are migrated over, managing performance, especially storage performance, becomes more difficult. Remember that all storage services, including in most cases data protection, run within the hypervisor architecture, typically as a VM. As a VM begins to demand more and more performance, the storage VM will also spike as it responds to that demand. The shared-everything nature of HCI makes it very difficult to identify which VM is causing a performance problem.
The post-installation period will also likely be the first time the customer encounters the data protection challenge. Most HCI architectures count on either replication or an erasure-coding-like scheme to protect and share data across the cluster. These schemes maintain a set number of protected copies of data, which determines how many node failures the architecture can sustain before data loss or an application outage.
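The capacity-versus-resilience trade-off between the two schemes can be sketched with simple arithmetic. The following is a hypothetical illustration of the general idea, not any vendor’s actual implementation:

```python
# Hypothetical sketch of HCI protection-scheme math (illustrative only).

def replication_overhead(copies: int) -> float:
    """N-way replication stores `copies` full copies of every block."""
    return float(copies)  # raw capacity consumed per usable unit

def erasure_coding_overhead(data_frags: int, parity_frags: int) -> float:
    """Erasure coding splits data into fragments plus parity fragments."""
    return (data_frags + parity_frags) / data_frags

def failures_tolerated_replication(copies: int) -> int:
    return copies - 1  # data survives until only one copy remains

def failures_tolerated_ec(parity_frags: int) -> int:
    return parity_frags  # each parity fragment absorbs one node loss

# Two-way replication: 2x capacity cost, survives one node failure.
assert replication_overhead(2) == 2.0
assert failures_tolerated_replication(2) == 1

# 4+2 erasure coding: 1.5x capacity cost, survives two node failures.
assert erasure_coding_overhead(4, 2) == 1.5
assert failures_tolerated_ec(2) == 2
```

The pattern the sketch shows is general: erasure coding trades CPU work (computing parity) for capacity efficiency, while replication trades capacity for simpler, cheaper reads and rebuilds.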
The challenge with these node-based protection schemes is that when a node fails, the HCI software immediately tries to correct the problem by recreating the fallen node’s data on another node. Recreating data on another node is a bandwidth- and processor-consuming task, but obviously, in a real failure situation, this is exactly what should occur. But what about a case where the node goes down but has not failed? A temporary network connection problem, routine node maintenance, or a node reboot might all trigger massive data copies.
This concern is especially legitimate in the HCI world. There are so many processes all sharing the same CPU, storage, and network, that any of them might cause a disruption. And all of these processes are so dependent on these resources that a loss of one of them can have a widespread impact.
Some systems offer a feature to delay the data rebuild process, or a manual switch to turn off automatic rebuilds, but most vendors offer nothing of the sort.
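The kind of rebuild-delay feature described above amounts to a grace period: wait before rebuilding so a transient outage (a reboot, maintenance window, or network blip) does not trigger a full data copy. A minimal sketch of such a policy, with hypothetical names and a made-up one-hour delay, might look like:

```python
# Hypothetical rebuild-delay policy (illustrative, not a real product API).

class RebuildPolicy:
    def __init__(self, delay_seconds: float, auto_rebuild: bool = True):
        self.delay = delay_seconds
        self.auto_rebuild = auto_rebuild
        self.down_since: dict = {}  # node name -> time it went down

    def node_down(self, node: str, now: float) -> None:
        self.down_since.setdefault(node, now)  # start the grace period

    def node_up(self, node: str) -> None:
        self.down_since.pop(node, None)  # node returned: cancel pending rebuild

    def should_rebuild(self, node: str, now: float) -> bool:
        if not self.auto_rebuild or node not in self.down_since:
            return False
        return (now - self.down_since[node]) >= self.delay

policy = RebuildPolicy(delay_seconds=3600)      # one-hour grace period
policy.node_down("node-3", now=0)
assert not policy.should_rebuild("node-3", now=600)   # 10 min in: still waiting
assert policy.should_rebuild("node-3", now=3600)      # 1 hour in: start rebuild
policy.node_up("node-3")
assert not policy.should_rebuild("node-3", now=3600)  # node came back: no rebuild
```

The design choice worth noting is the trade-off the delay creates: a longer grace period avoids unnecessary rebuild traffic, but it also lengthens the window during which the cluster is running with reduced redundancy.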
A Challenging Growth Process
Again, scaling HCI requires the addition of a node. That node, of course, has to be physically installed, connected to the network, and joined to the cluster. This process occurs every time a node is added, no matter the reason. When a node is added, the data protection scheme changes, in most cases automatically, with portions of data being moved to the new node, freeing up capacity on the existing nodes but putting strain on the network.
As the node count continues to increase, the network becomes increasingly complex, especially as the count grows into the double digits. At these double-digit levels, communication between nodes is incredibly high, representing as much as 70% or more of the cluster’s network IO. Not only are there more nodes to distribute data across, there are also more new workloads or increasingly busy existing workloads (hence the reason for the expansion).
For the HCI environment to continue to run smoothly, IT needs to make sure the network is rock solid; simple configuration mistakes, such as improper port tagging, can impact cluster performance. In many cases, the data center ends up investing in higher-performance, higher-quality IP network components to keep the quality of cluster communications high.
Is Dedicated Shared Storage Less Complicated?
HCI’s competitor is traditional dedicated shared storage. The HCI “pitch” is that dedicated shared storage is too expensive and too complex. In reality, everything at scale has a degree of complexity, but a new dedicated shared storage system does not carry the same concerns that HCI storage architectures do. Its processing power is dedicated to the task of storage IO and storage features. Failure of shared storage system components is rare; unlike HCI nodes, these systems are not designed around the expectation of routine failure. Even startup, assuming the organization wants to keep its server investment, should be easier: the new shared storage system can be added to the same hypervisor cluster, and Storage vMotion can be used to move data between the old and new systems.
When it comes to scale, HCI has some very specific concerns that IT needs to consider carefully before making the jump. The network and data protection aspects are the most easily forgotten, yet they are critical. If nodes are expected to be added to an HCI environment with any frequency, it may be easier to use existing servers and simply upgrade the existing storage investment.
Sponsored by Tintri