Hyperconvergence Has To Scale Better – Scale Computing Briefing Note

Hyperconverged architectures are a natural fit for medium to large businesses that can house all or most of their applications within their own environment. But hyperconverged vendors need to help those businesses get started and then extend the hyperconverged use case as the environment scales.

Starting Smaller

Most modern businesses start by outsourcing most of their application requirements to cloud-based services. They may also have a few internal applications that run on a single system. But as the business scales, there is value in insourcing these services, and hyperconvergence is an ideal foundation for a growing business's application requirements. The problem is that most hyperconverged architectures require a minimum of three nodes to start, which is overkill for a small environment, so the business ends up building a virtualized cluster one independent server at a time. To get the business off on the right foot, hyperconverged architectures should start with a single node and then scale out as the business grows.

Starting with a single node, of course, requires that the hyperconverged solution can protect itself internally and seamlessly extend that protection as other nodes are added. A replica protection scheme is ideal for this strategy. Initially, the replicas can be stored within the single node; then, when other nodes are added, those replicas can move to the new nodes.
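As a rough illustration of how such a scheme could grow from one node to many, here is a minimal sketch in Python (hypothetical names and logic, not Scale's actual code) of a replica placement decision that keeps the second copy on a separate local disk while the cluster has one node and moves it to a peer node once more nodes join:

# Hypothetical sketch of replica placement in a cluster that starts with one node.
# Not Scale Computing's actual code; names and logic are illustrative only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    used_tb: float  # capacity already consumed on this node

def place_replica(primary: Node, nodes: list[Node]) -> tuple[Node, str]:
    """Choose where the second copy of a block should live."""
    if len(nodes) == 1:
        # Single-node cluster: keep the replica on a different local disk so a
        # disk failure does not take out both copies.
        return primary, "alternate local disk"
    # Once more nodes join, move replicas off the primary so a whole-node
    # failure still leaves one intact copy. Pick the least-full peer.
    peers = [n for n in nodes if n is not primary]
    return min(peers, key=lambda n: n.used_tb), "remote disk"

# Example: the same call works for a one-node and a three-node cluster.
n1 = Node("node-1", 4.0)
print(place_replica(n1, [n1]))
cluster = [n1, Node("node-2", 1.5), Node("node-3", 2.0)]
print(place_replica(n1, cluster))

The point of the sketch is simply that the same placement call works at every cluster size, which is what lets the protection scheme extend seamlessly as nodes are added.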

Node Flexibility

One of the attractions of hyperconvergence is scalability. Each time IT adds a node to a cluster, that node comes with a preset amount of storage capacity and compute performance. The problem is that most data centers don't scale these two resources in perfect unison. Some of the organization's applications are very capacity-centric and don't require a lot of IO performance, while others may have a very small capacity footprint but demand large amounts of IO performance.

Node flexibility solves this dichotomy. The hyperconverged solution should support nodes of different sizes and types within the same cluster. Mixing node types allows the organization to buy all-flash nodes for performance-demanding applications and high-capacity nodes for applications that manage a lot of data. In fact, with the right software, an organization should be able to buy only one or two all-flash nodes and then have the replicas stored on high-capacity nodes.
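A minimal sketch of how placement might work in such a mixed cluster follows, assuming hypothetical node attributes (all_flash, free_tb) rather than Scale's real data structures: the primary copy of a performance-sensitive workload lands on an all-flash node, while the replica is steered to a high-capacity node.

# Hypothetical sketch of data placement in a mixed cluster: primary copies of a
# performance-sensitive VM land on all-flash nodes, replicas on high-capacity
# nodes. Illustrative only; not Scale Computing's placement logic.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    all_flash: bool
    free_tb: float

def choose_primary_and_replica(nodes, performance_sensitive):
    flash = [n for n in nodes if n.all_flash]
    capacity = [n for n in nodes if not n.all_flash]
    # Primary copy: a flash node if the workload needs IOPS and one exists,
    # otherwise whichever node has the most free space.
    primary_pool = flash if (performance_sensitive and flash) else nodes
    primary = max(primary_pool, key=lambda n: n.free_tb)
    # Replica: prefer a high-capacity node so flash is reserved for hot data.
    replica_pool = [n for n in (capacity or nodes) if n is not primary]
    replica = max(replica_pool, key=lambda n: n.free_tb)
    return primary, replica

cluster = [Node("flash-1", True, 3.0), Node("cap-1", False, 40.0),
           Node("cap-2", False, 55.0)]
print(choose_primary_and_replica(cluster, performance_sensitive=True))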

Remote Office / Branch Office Convergence

For larger organizations with remote offices, hyperconverged architectures are attractive because they give IT the ability to create a "data center in a box" experience. In this use case, all the applications are pre-loaded and the system is shipped to the remote office. Then IT just needs to plug the system in, and the office is up and running.

Using hyperconvergence for remote or branch offices underscores the value of a single-node starting point. It enables the same software to be used throughout the enterprise. Remote cluster monitoring is also a critical feature. Most remote and branch offices won't have local IT staff to pay attention to the system, so a robust remote monitoring capability is a must-have.

Efficiency Matters

The ability to mix node sizes helps maximize the efficiency of the cluster as it scales. In addition to node flexibility, organizations need data efficiency features that eliminate redundant copies of data and maximize capacity utilization.

Scale Computing Turns The Crank

Scale Computing is a hyperconverged vendor focused on the medium to large business market. The company is also gaining the attention of larger enterprises, especially for the remote and branch office use case.

Most hyperconverged vendors use VMware or Hyper-V as their hypervisor, to which they add storage software services. Scale is unique in that its HyperCore software is based on open source technology and tightly integrates its storage services. The result is a more cost-effective approach to hyperconvergence that remains easy to install, easy to deploy virtual machines on, and easy to operate day to day.

Scale Computing is delivering version 7.3 of its HyperCore software and has rolled out several new node choices. The headliner for the update is the addition of post-process deduplication, which means that during idle periods the software looks for redundant data throughout the cluster and deduplicates it.

Post-process deduplication is sometimes deemed less desirable than in-line deduplication because redundant data is stored for a period of time before being eliminated. But in the hyperconverged use case, post-process may make more sense. In these architectures the CPUs are called on to perform a wide variety of functions, ranging from running applications to delivering data services. Deduplication is a fairly heavyweight process, so making sure it runs only during idle times is a safe way of ensuring that it won't impact other production processes.
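A minimal sketch of the idea, not HyperCore's actual implementation, is shown below: the pass only runs when the CPU is idle, fingerprints each block, and collapses identical blocks into a single physical copy plus references.

# Hypothetical sketch of post-process deduplication: during idle periods, scan
# stored blocks, fingerprint them, and collapse duplicates into references.
# Illustrative only; HyperCore's actual implementation is not shown here.

import hashlib

def deduplicate_when_idle(blocks, cpu_busy_pct):
    """blocks: dict mapping block_id -> bytes. Returns (unique_store, refs)."""
    if cpu_busy_pct > 20:
        # Defer the pass: dedup is heavyweight, so only run it when the cluster
        # is otherwise idle and production VMs won't feel the impact.
        return None
    unique_store = {}   # fingerprint -> bytes (one physical copy per pattern)
    refs = {}           # block_id -> fingerprint (the logical view is unchanged)
    for block_id, data in blocks.items():
        fp = hashlib.sha256(data).hexdigest()
        unique_store.setdefault(fp, data)
        refs[block_id] = fp
    return unique_store, refs

blocks = {1: b"A" * 4096, 2: b"B" * 4096, 3: b"A" * 4096}
store, refs = deduplicate_when_idle(blocks, cpu_busy_pct=5)
print(len(blocks), "logical blocks ->", len(store), "physical blocks")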

In 7.3, Scale also adds Remote Cluster Monitoring and multiple user login. Remote Cluster Monitoring provides a new management view for keeping watch over multiple clusters, and the interface provides at-a-glance health and status views.

New Hardware

From a hardware perspective, Scale Computing is introducing two new nodes. The HC5150D has a per-node compute capability of 16 to 20 processing cores and is configurable with up to 768GB of RAM. It can support 2.88TB to 5.76TB of SSD capacity and 36TB to 72TB of hard disk capacity.

The HyperCore software automatically moves data between tiers based on access patterns. Administrators can also give higher priority to certain applications based on a weighting scale. A setting of zero means the VM will never use flash, while a setting of 11 means that it will almost always use flash.
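A minimal sketch of how such a 0-to-11 weighting could interact with observed access patterns is below; the thresholding logic is an assumption for illustration, not the actual HyperCore tiering algorithm.

# Hypothetical sketch of how a 0-11 priority weighting might combine with
# observed access patterns to decide whether a VM's data sits on flash.
# Illustrative only; not the actual HyperCore tiering algorithm.

def should_use_flash(priority, access_heat):
    """priority: 0-11 admin weighting; access_heat: 0.0-1.0 recent access rate."""
    if priority == 0:
        return False          # 0 means the VM never uses flash
    if priority == 11:
        return True           # 11 means it (almost) always uses flash
    # In between, the admin weighting raises or lowers the heat threshold the
    # data must reach before it is promoted to the flash tier.
    threshold = 1.0 - (priority / 11.0)
    return access_heat >= threshold

print(should_use_flash(priority=0, access_heat=0.9))   # False: flash disabled
print(should_use_flash(priority=6, access_heat=0.7))   # True: warm enough
print(should_use_flash(priority=6, access_heat=0.2))   # False: too cold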

Scale Computing also delivers an all-flash node, the HC1150DF. It has similar processing capability to the HC5150D but comes in a flash-only configuration (up to 7.68TB).

Again, a key is Scale's choice of replicas as its data protection scheme. First, that choice means all reads are local to the node the VM runs on, so the application does not have to deal with network read latency. Second, nodes can be intermixed. An organization with read-heavy applications could decide to configure a cluster with one or two HC1150DF nodes and multiple HC5150D nodes, striking a balance between performance and cost.

StorageSwiss Take

Scale Computing is now a veteran of the hyperconverged market. Its choice not to use VMware or Hyper-V is paying off in the markets it serves and is now getting the attention of larger organizations looking to avoid the tax that a name-brand hypervisor inflicts.

The challenge with hyperconvergence has always been dealing with the growth of the cluster. Scale Computing's use of replicas for data protection and its variety of node types resolve many of those issues.

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
