Why VMware Storage is STILL a Problem

Most VMware storage solutions attempt to fix storage pain points with a sledgehammer instead of a scalpel. While the sledgehammer approach does solve, or at least mask, some VMware storage problems, many still remain. More importantly, the life of the IT professional tasked with managing the VMware storage infrastructure didn't get any easier as a result; they still have to manage it the same way they always have.

The All-Flash Sledgehammer

All-flash arrays, especially as they become increasingly affordable, help IT professionals solve one of their biggest challenges: VMware's infamous IO blender. The IO blender is the result of multiple physical servers, each populated with potentially dozens of virtual machines, continuously accessing the storage system, which becomes a choke point. Instead of prioritizing workload IO, the all-flash system resolves the issue by responding much more quickly to IO demands than hard disk or hybrid systems can.
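To make the IO blender concrete, here is a minimal sketch (the VM count, block ranges, and arrival order are illustrative assumptions, not measurements): each VM issues a perfectly sequential read stream, but because the shared array receives requests from all VMs interleaved in arrival order, the pattern it actually services looks nearly random.

```python
import random

def vm_stream(start_lba, length):
    # Each VM reads its own virtual disk sequentially.
    return list(range(start_lba, start_lba + length))

# Hypothetical layout: four VMs, each issuing an 8-block sequential read.
streams = [vm_stream(vm * 1000, 8) for vm in range(4)]

# The shared array sees requests from all VMs interleaved in
# arrival order, not in any single VM's sequential order.
blended = []
while any(streams):
    active = [s for s in streams if s]
    blended.append(random.choice(active).pop(0))

# Count adjacent request pairs that are NOT sequential; on spinning
# disks each such jump costs a seek. Flash has no seek penalty,
# which is why low latency masks (but does not remove) the blend.
jumps = sum(1 for a, b in zip(blended, blended[1:]) if b - a != 1)
print(f"{jumps} of {len(blended) - 1} adjacent requests are non-sequential")
```

On spinning media every one of those jumps is a head seek, which is why the blended pattern devastates hard-disk performance; flash simply pays no seek penalty, which masks rather than prioritizes the contending streams.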

Like most sledgehammers, all-flash seems to solve the problem, but as the environment continues to increase virtual machine density and mix workload types, the IO blender problem creeps back in. IT professionals quickly learn that lower-latency infrastructure is only part of the answer. Storage systems have to use all of their resources intelligently to provide balanced, consistent performance. All-flash arrays alleviate some of the IO blender problem because they reduce latency, not because VMware can tap into their raw IOPS capability. The lack of an intelligent storage system forces the organization either not to virtualize some workloads or to dedicate certain types of storage systems to each workload type.

The result is a management nightmare for IT professionals. They have to constantly rebalance workloads across the various storage systems supporting the infrastructure, and they are in the dark about how the next new workload will impact the performance of the currently running virtual machines. For example, if the organization decides to virtualize a bare-metal MS-SQL cluster, the VMware administrator not only doesn't know how many resources are available but also can't measure the impact of virtualizing the new workload. The only "alert" available is when users start complaining about performance.

The Hyperconverged Sledgehammer

Another attempt at addressing VMware's IO blender problem is hyperconverged infrastructure (HCI). HCI approaches vary, but typically they work by running a component of the storage software on the same hardware as the hypervisor and virtual machines (VMs). They also keep a copy of each VM's data locally on the server that is hosting the VM, as well as a distributed copy for data protection and to facilitate VM mobility. Ideally, the local copy services all read IO, which reduces network traffic and the impact of the IO blender. Additionally, an increasing number of HCI architectures are also all-flash, further reducing the IO blender effect.

HCI scales by adding additional physical servers, typically called nodes, to the hypervisor cluster. Each node includes compute, memory, networking and storage. The problem is that most data centers don't scale all of these components in lock step. Each organization tends to need significantly more of one type of resource than another, given the diversity of its applications. For example, if the organization needs more storage, it buys a node to meet that need, but that node also comes with all the other resources, and those resources go unused. Additionally, each time IT adds another node to the hypervisor cluster, complexity increases, especially on the network.
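A back-of-the-envelope calculation shows the stranded-resource problem. The node specification and workload demands below are illustrative assumptions, not vendor figures; the point is only that whole-node scaling is driven by the single most demanded resource.

```python
import math

# Hypothetical HCI node specification (assumption, not vendor data).
NODE = {"cpu_cores": 32, "ram_gb": 512, "storage_tb": 20}

# Suppose the cluster's workloads actually need:
needed = {"cpu_cores": 64, "ram_gb": 1024, "storage_tb": 200}

# HCI scales in whole nodes, so the cluster size is set by the
# most demanded resource -- here, storage capacity.
nodes = max(math.ceil(needed[r] / NODE[r]) for r in NODE)
print(f"nodes required: {nodes}")

# Every other resource is over-provisioned and sits idle.
for r in NODE:
    bought = nodes * NODE[r]
    print(f"{r}: bought {bought}, needed {needed[r]}, stranded {bought - needed[r]}")
```

In this sketch the 200 TB storage requirement forces ten nodes, so the cluster ships with 320 cores and 5 TB of RAM when the workloads need only 64 cores and 1 TB; the surplus is paid for but never used.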

HCI hits the VMware IO blender problem with an even bigger sledgehammer. It does apply some intelligence by reducing the amount of read IO on the network, but it increases the impact of write IO on the network. HCI sacrifices efficiency in its attempt to eliminate the IO blender.

The Innovation that VMware Storage Needs

Addressing VMware challenges like the IO blender, increasing VM density and efficiently leveraging more powerful servers requires intelligent application of infrastructure resources end to end, a deep awareness of the VMware operating environment and efficient scaling of storage capacity and performance. Instead of a generic system designed for a multitude of workloads that might happen to include VMware, IT should consider a storage system purpose built for VMware.

Our next blog will detail how VMware-aware storage systems that operate at the VM level, the atomic unit of a VMware environment, provide not only the ability to take action on specific VMs but also per-VM visibility, control, and learning based on the behavior of each VM. The result is a significantly more efficient system that provides better overall performance and significantly reduces administration time.

Sponsored by Tintri by DDN


Twelve years ago George Crump founded Storage Switzerland with one simple goal; to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration and product selection.

