How Storage Vendors Integrate VVols

VVols simplifies the process of setting up and managing storage to support VMs by enabling VMware administrators and users to create specific storage policies. But under the covers, VMware leaves the implementation of VVols to the storage array vendor. In this article we will compare the different ways that storage vendors can implement VVols, the different levels of VVols integration, and what they mean for a typical VMware environment.

VM-level Management

A storage container is a logical storage volume that is created from the back-end storage to support specific policies. VVols allows for the creation of large numbers of storage containers, as many as required to provide the combination of storage characteristics needed by the VMs in the environment. This allows for the VM-level storage management that’s part of the appeal of VVols. This implementation, done through the vSphere APIs for Storage Awareness (VASA), involves a translation from user-oriented policy rules, like availability or performance, to storage-specific characteristics, like media type or number of drives.
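To make the idea concrete, here is a minimal Python sketch of VM-level, policy-driven placement. All of the names are illustrative assumptions for this article, not part of the actual VASA API.

    # Minimal sketch of policy-driven VVol placement. All names are
    # illustrative; none of this is the real VASA API.
    from dataclasses import dataclass

    @dataclass
    class StorageContainer:
        name: str
        media_type: str     # a storage-specific characteristic, e.g. "flash"
        drive_count: int

    def place_vvol(required_media, containers):
        """Pick a container whose characteristics satisfy the VVol's policy."""
        for c in containers:
            if c.media_type == required_media:
                return c
        raise LookupError("no container offers media type " + required_media)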

VVols Depend on Back-end Storage

How a particular array performs that translation of policies into storage characteristics is completely up to the array vendor (within the structure of VASA), but it is constrained by the array architecture. For example, a storage system with flash drives could support a ‘high performance’ container by simply creating it out of any available space, since flash performance is essentially homogeneous. A disk-based array would most likely put that same container into a high-spindle-count, RAID10 volume.
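As a hypothetical sketch of that vendor-specific translation, the same ‘high performance’ policy could land on very different physical layouts depending on the architecture (the values below are illustrative, not any vendor’s actual mapping):

    # The same policy translated by two different array architectures.
    def provision_high_performance(array_type):
        if array_type == "all-flash":
            # Flash performance is essentially homogeneous, so any
            # available space satisfies the policy.
            return {"pool": "any-free-space", "media": "flash"}
        # A disk-based array needs many spindles and mirroring to
        # deliver comparable performance.
        return {"pool": "high-spindle-count", "media": "disk", "raid": "RAID10"}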

The fact that storage characteristics are functions of the back-end storage itself means that changing the policies associated with a particular VVol requires migrating that VVol into another container. For example, if a VVol needed more performance, that could mean moving it from a disk-based container to a flash container, something VMware can do with Storage vMotion. And although this process can be automated through the VASA APIs, it still requires that the VM be briefly “stunned” and that data be moved.
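The workflow looks roughly like the sketch below. The helper functions are hypothetical stand-ins for vSphere and VASA operations (reduced to prints so the sketch runs as-is), not real API calls.

    # Rough sketch of a policy change that forces a VVol migration.
    def copy_bulk_data(vvol, dst): print("bulk copy of", vvol["name"], "to", dst)
    def copy_final_delta(vvol, dst): print("final sync of", vvol["name"], "to", dst)
    def stun(vm): print("stunning", vm)      # briefly pause the VM's I/O
    def unstun(vm): print("resuming", vm)

    def change_policy(vvol, new_policy, containers):
        destination = containers[new_policy]
        if destination == vvol["container"]:
            return                            # new policy satisfied in place
        copy_bulk_data(vvol, destination)     # data moves while the VM runs
        stun(vvol["vm"])                      # short pause for the switchover
        copy_final_delta(vvol, destination)
        vvol["container"] = destination
        unstun(vvol["vm"])

    vvol = {"name": "vvol-01", "vm": "web-01", "container": "disk-pool"}
    change_policy(vvol, "Gold", {"Gold": "flash-pool", "Silver": "disk-pool"})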

This requirement to migrate VVols between containers, and specifically between the physical storage behind them, can impact VM performance, since the array controllers and the host processors must handle the overhead. The act of stunning, or quiescing, the VM will impact performance as well. And when VVols are migrated between separate storage systems, network latency must also be taken into account. How they handle this migration is one of the things that differentiates the way storage systems support VVols.

For this comparison of VVols implementations, we’ll use the following admittedly simplistic scenario. A VMware administrator using VVols has created two policies, Gold and Silver, to support VMs with different performance requirements. The Gold policy maps to a container comprised entirely of flash drives, and the Silver policy maps to a container comprised entirely of disk drives.

How Storage Systems Implement VVols

At the most basic level of VVols compatibility, the VMware environment above could use two storage systems. Array 1, an all-disk array, would support the Silver container with a disk-drive storage pool, and Array 2, an all-flash array, would support the Gold container with a flash storage pool.

The VMware administrator only has to create a policy for each tier that describes the specific performance it needs (Gold or Silver) and then choose which one to assign to each VVol. But the storage system still needs to put those VVols into the right physical storage pools (the ones that support each container), and if a policy changes, the system has to move the affected VVols to other pools. In this case, if a VVol’s policy is changed from Gold to Silver, its data is migrated from Array 2 to Array 1.
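Expressed as data, the scenario makes the cost of a policy change easy to see. This sketch (array and container names are from the scenario above) simply reports which kind of migration a change would trigger:

    # The Gold/Silver scenario as data. With one array per tier, any
    # policy change implies moving data between physical arrays.
    containers = {
        "Gold":   {"array": "Array 2", "media": "flash"},
        "Silver": {"array": "Array 1", "media": "disk"},
    }

    def migration_scope(old_policy, new_policy):
        src, dst = containers[old_policy], containers[new_policy]
        if src["array"] != dst["array"]:
            # Controller overhead on both arrays, plus network latency.
            return "cross-array migration"
        return "intra-array migration between tiers"

    print(migration_scope("Gold", "Silver"))   # cross-array migration

Note that if both containers lived on one array, as in the next scenario, the same check would fall through to the intra-array branch.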

At a higher level of VVols integration, both containers could be housed in the same storage system, assuming the array has the ability to create both the flash and disk storage pools. This could be a traditional enterprise array or a hybrid storage array with flash and disk-drive tiers. In this scenario, Gold would be carved out of an all-flash tier and Silver out of a disk-only tier.

Users can change storage policies for VMs without moving their VVols between arrays, but data must still be migrated between containers (and tiers) on the single array. This eliminates the network latency and the processing overhead of two different storage controllers, but it still involves migrating data.

No VVol Migration

Hybrid storage arrays that leverage caching and automated data placement can support a higher level of VVols integration. These systems continually update the placement of data to reflect current usage. When data is left idle, the system automatically moves it to a lower-performing tier, and back to a higher tier when it becomes active again. This takes the inherent capability of hybrid arrays, the ability to continuously update data placement based on current policies, and applies it at the VM level instead of at the LUN level.
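A simplified sketch of such a placement pass, applied per VVol rather than per LUN, might look like this. The one-hour threshold and the data model are assumptions for illustration, not any vendor’s actual algorithm.

    import time

    IDLE_SECONDS = 3600   # illustrative demotion threshold

    def placement_pass(vvols, now=None):
        """Demote idle VVols to disk; promote active ones back to flash."""
        now = time.time() if now is None else now
        for v in vvols:
            idle_for = now - v["last_io_time"]
            if idle_for > IDLE_SECONDS and v["tier"] == "flash":
                v["tier"] = "disk"     # cold data drops to the disk tier
            elif idle_for <= IDLE_SECONDS and v["tier"] == "disk":
                v["tier"] = "flash"    # recently active data is promoted

In a real array this pass runs continuously and moves data blocks rather than whole volumes, but the principle is the same.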

This level of integration also eliminates the need to change performance policies, since the system accommodates fluctuating performance demand automatically. The result is a storage system that’s ideal for VVols. These systems also simplify the creation of storage containers, since they essentially use a single container to support all performance policies. However, they also assume adequate resources will be available to support all VVols.

The I/O Blender and Overprovisioning

When the environment is running under ‘normal’ conditions, these basic data placement algorithms typically work fine. But as VMware administrators know, things can change quickly in a virtualized environment. When activity peaks, the “I/O Blender” effect can kick in, causing storage resource contention. This can result in performance problems, or in the need to overprovision to cover peak demand for all VMs, since the system can’t tell which VMs are the most critical.

Some hybrid storage systems provide a software-defined Quality of Service (QoS) capability that can prioritize VVols when there is contention for performance on the system. This eliminates the need to overprovision to accommodate all VMs and guarantees that the most critical VMs get the resources they need first.
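One plausible shape for such a QoS allocator is sketched below: guarantee each VVol its minimum IOPS first, then hand out whatever remains in priority order. The per-VVol minimum/priority model here is an assumption for illustration.

    def allocate_iops(vvols, total_iops):
        """Grant each VVol its minimum, then share the surplus by priority."""
        grants = {v["name"]: v["min_iops"] for v in vvols}   # guarantees first
        remaining = total_iops - sum(grants.values())
        for v in sorted(vvols, key=lambda x: x["priority"], reverse=True):
            extra = min(v["demand_iops"] - v["min_iops"], remaining)
            if extra > 0:
                grants[v["name"]] += extra
                remaining -= extra
        return grants

    vvols = [
        {"name": "crm-db",  "priority": 3, "min_iops": 5000, "demand_iops": 12000},
        {"name": "test-vm", "priority": 1, "min_iops": 500,  "demand_iops": 8000},
    ]
    print(allocate_iops(vvols, 10000))
    # crm-db gets the surplus; test-vm still keeps its guaranteed minimum

Even when total demand exceeds what the array can deliver, the critical VM is served first and no VM falls below its guarantee.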

Summary

VVols have the potential to improve management and increase the efficiency of storage systems supporting virtual environments. But the implementation of VVols varies between vendors, creating very different experiences for both users and storage administrators, especially when storage policies are changed.

At a basic level, most VVols implementations will provide VM-level, policy-based storage management. But they can still require VVol migration when policies change, leaving the ‘heavy lifting’ to the storage system. This can be a big problem: users, removed from the actual data handling in the underlying storage, may be unaware of the performance load that VVol migrations put on the system.

Hybrid storage arrays improve on this basic implementation and address the problem of data migration. By leveraging their ability to automatically move data blocks between storage tiers, they can eliminate VVol migration and consolidate storage containers, but they may still have trouble when storage demand spikes.

Some hybrid storage systems have added a software-defined QoS capability to resolve these issues with storage contention. By prioritizing VVols they can ensure that the most critical VMs get the resources they need first, even during peak demand periods. This implementation of VVols looks to provide the power and simplicity of VM-level management while maintaining storage performance and efficiency, even under the unpredictable conditions common in virtual environments.

Sponsored by NexGen Storage

About NexGen Storage

NexGen builds hybrid flash arrays with data management capabilities that enable their customers to prioritize data and application performance based on business value. The goal is to avoid the high cost of all-flash arrays that treat all data as high-value data. Their integration with VVols allows for even more intelligent data prioritization without the penalty of having to move data between different physical storage systems or between storage containers on the same system.

Eric is an Analyst with Storage Switzerland and has over 25 years’ experience in high-technology industries. He has held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.

Comments on “How Storage Vendors Integrate VVols”
  1. Mark Burgess says:

    Hi Eric,

    I think you are spot on – supporting VVOLs is like everything in a storage array and the devil is in the detail. There are good and bad implementations of snapshots, replication and now VVOLs – customers need to be aware that these features are not just tick boxes.

    So far I have only had a chance to look in detail at what NetApp has done with the technology; their implementation is not perfect and there is room for improvement, but it is pretty good.

    They have specifically addressed one of your points above with regard to moving VVOLs between different disk tiers by implementing something called the OnDemand engine. This moves the VVOL automatically if required without the host being aware – within an array and across arrays – pretty cool.

    They also support the option of hybrid volumes with QoS.

    I have more thoughts at http://blog.snsltd.co.uk/vmware-vvols-on-netapp-fas-is-now-available-to-deploy/

    Best regards
    Mark

  2. I have to take issue with saying that the best approach to performance contention and fluctuating performance demands is to have performance tiers and to be constantly moving your data around based on how it’s currently performing. That’s a lot of extra writes to disk just to handle performance.

    The best approach is to set up a QoS policy that allows for those periodic fluctuations within an acceptable range – as long as every app is getting what it needs. So when your app is consuming more IOPS, allow it to do so up to a sustainable maximum, or even a temporary burst if the array can handle it. And then when the application slows down, use those IOPS for another application while guaranteeing a minimum performance level for all applications.

    And if you ever need to change the QoS of a volume because of permanently new performance requirements, simply modify the QoS settings without moving the data at all.

    That’s the best method in today’s world for dealing with performance management under contention. It’s called set it and forget it, not tier it and wear it out.

  3. Eric,

    Couple of things – You mention in the beginning of your post that ‘VVols allows for the creation of large numbers of storage containers, as many as required to provide the combination of storage characteristics needed by the VMs in the environment.’ Not necessarily accurate.

    I’ll use a specific vendor in this case – 3PAR. At the time of this posting, 3PAR supports only entire-array allocation to storage containers, or entire allocation of a virtual storage domain. And in that array or virtual storage domain, you can only create one storage container. The 3PAR VASA provider defines the capabilities to vCenter and allows the provisioning of VVols from within vCenter. A storage container is simply a pool of storage with capabilities defined to vCenter by the VASA provider in the array. These capabilities are the building blocks for your storage policies.

    Some of the problems that VVols was looking to solve were LUN sprawl, VM management granularity, and enabling the capabilities of an SDDC. The ESXi hosts talk to the array via a special LUN, called a Protocol Endpoint, so it appears to the ESXi host that there’s a single LUN. This eliminates the 256-LUN limit per ESXi host, allowing you to scale out to thousands of VVols per host.

    Additionally, this gives you the granularity to apply a storage policy per VM, granular snapshot capability, and file placements on the array (defined in your storage policies, naturally).

    And since this is all applied via vSphere storage policy, this speeds up provisioning and allows capability for orchestration.

    Also, at the time of this post, VASA does not advertise auto-tiering capabilities. That is something that you’ll need to speak to your storage folks about, but I understand that capability will be in VASA in the near future. So a decision will need to be made about how tiering occurs, whether manually in vSphere storage policies, on the array leveraging its features, or a combination of both.

    I believe the most important thing to know about VVols currently is the inability to replicate. When replication does work with VVols in the future, it would be nice if VASA advertised the capability so that it could be applied via storage policy.

    Cheers!
