A discussion about the storage infrastructure supporting virtual servers or virtual desktops almost always starts with a focus on performance. After all, solving the much-discussed I/O blender problem often takes center stage. But according to a poll conducted in our recent webinar, “The 5 Reasons Why Storage Is Eating Away Your Virtualization ROI And How To Stop It”, volume design and assuring application performance took the top votes.
The Volume Design Problem
In a perfect world, you would just create one big volume and load all your virtual machines (VMs) onto it. The problem is that from that point forward, all VMs receive the same storage I/O performance, no matter how important a given VM is to the business.
There are several workarounds for this problem. First, you can make sure that the storage system is so fast that it can handle any storage I/O request. This is the justification for All-Flash Arrays. But many data centers can't justify or afford that leap; they need to get by with their current systems or use Hybrid Arrays.
Current systems can have SSDs added to them, with a special SSD LUN created on that capacity. Alternatively, a basic SSD appliance (less expensive than an All-Flash Array) can be installed into the infrastructure. Then the performance-sensitive VMs can be migrated via Storage vMotion to either the special LUN or the SSD appliance.
A Storage vMotion is not a trivial task and takes time, but both of these solutions are workable, at least until there are too many special LUNs to manage. Storage vMotion as a performance enhancer also becomes a problem as the frequency with which you need to move VMs between storage tiers increases. In either case, the management overhead grows, as does the criticality of the storage infrastructure.
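The tiering decision behind this workaround can be sketched in a few lines of Python. This is purely illustrative, not a VMware API: the VM names, IOPS figures, threshold, and capacity limit below are all hypothetical. In practice an administrator would gather I/O statistics from the hypervisor and then trigger the actual Storage vMotion for each selected VM.

```python
# Illustrative sketch only: choose which VMs to migrate to an SSD tier.
# All names and numbers are hypothetical; the real move would be done
# with Storage vMotion after collecting I/O stats from the hypervisor.

def pick_vms_for_ssd(vm_iops, threshold, ssd_capacity_vms):
    """Return the busiest VMs (up to the SSD tier's capacity in VMs)
    whose observed IOPS meet or exceed the threshold."""
    hot = [vm for vm, iops in vm_iops.items() if iops >= threshold]
    hot.sort(key=lambda vm: vm_iops[vm], reverse=True)
    return hot[:ssd_capacity_vms]

# Hypothetical per-VM IOPS observed over some sampling window.
vm_iops = {"sql01": 9500, "web01": 700, "exch01": 6200, "file01": 300}
print(pick_vms_for_ssd(vm_iops, threshold=5000, ssd_capacity_vms=2))
# → ['sql01', 'exch01']
```

The point of the sketch is the operational cost it hides: every VM on the returned list still has to be moved by hand (or by script) with a Storage vMotion, and moved back when its workload cools off, which is exactly the management overhead described above.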
Hybrid solutions are another option. These systems mix flash SSDs and hard disk drives and automate the movement of data between the two tiers. The problem is that most of these systems are simply promoting blocks of data to SSD. They don't understand the correlation of those blocks to the VMs. The relationship is between the storage LUN and the host server, not the VM. This makes tuning a specific VM for performance very challenging.
While some of these hybrid systems now have the ability to pin data to flash, that pinning is often at the LUN level, not the VMDK level. So again, either special performance LUNs need to be created or all VMs have to be treated equally.
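The block-level limitation can be made concrete with a small sketch. Everything here is hypothetical (the LUN names, block addresses, and access log), but it shows the core issue: the array's heat tracking is keyed by (LUN, block address), so even a perfect promotion algorithm has no way to know which VM or VMDK generated the I/O, and therefore no way to prioritize one VM over another on the same LUN.

```python
# Hypothetical sketch of block-level tiering in a hybrid array.
# The array counts accesses per (LUN, block) and promotes the hottest
# blocks to flash -- it never sees which VM's VMDK owns a block.

from collections import Counter

def promote_hot_blocks(access_log, flash_slots):
    """Pick the most-accessed (lun, block) pairs for promotion to flash."""
    heat = Counter(access_log)
    return [blk for blk, _ in heat.most_common(flash_slots)]

# Access log entries are (lun_id, block_address), one per I/O.
# The array cannot tell that, say, block 100 belongs to a
# business-critical VM while block 205 belongs to a test VM.
log = [("lun0", 100), ("lun0", 100), ("lun0", 101),
       ("lun0", 205), ("lun0", 100), ("lun0", 205)]
print(promote_hot_blocks(log, flash_slots=2))
# → [('lun0', 100), ('lun0', 205)]
```

Because the only handle the algorithm has is the block address, the unit of any policy (including pinning) naturally becomes the LUN, which is why per-VM tuning requires either dedicated performance LUNs or a storage layer with VMDK-level visibility.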
As our poll demonstrates, volume design and assuring virtualized application performance are the top concerns for IT professionals responsible for the virtual environment. As we discussed in the webinar, available on demand below, what is needed is a granular understanding of the VMware environment, so that specific VMs can be given a performance priority without having to give that priority to every VM on the LUN.