What are VVols?

With vSphere 6, VMware is set to address one of the biggest storage management problems facing the virtualized environment: associating virtual machines (VMs) with the storage they use. VVols provide that visibility and allow storage and server administration to be more easily merged. But VVols alone do not level the storage playing field; there will still be dramatic differences between storage systems, so it’s important to understand how these systems support VVols and what value they can deliver to the IT professional.

The VVol Architecture

There are four components to the VVol architecture: Storage Containers, Protocol Endpoints, Storage Providers and Storage Policies. Storage Containers are ‘subsets’ of storage on a storage array that are associated with specific capacity and performance capabilities. The containers are similar to what are currently called VMware “Datastores”. Each array needs at least one container, but most will have many, since the containers correspond to the various storage service levels required of VMs.
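
To make the container concept concrete, here is a minimal sketch using pyVmomi, VMware’s open-source Python SDK for the vSphere API. It assumes a reachable vCenter (the hostname and credentials below are placeholders) and simply lists the datastores vCenter sees, flagging those of type ‘VVOL’, which is how a mounted storage container surfaces in vSphere 6.

```python
# Minimal sketch: list VVol storage containers visible to vCenter.
# Hostname and credentials are placeholders; replace with your own.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the whole inventory for Datastore objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # summary.type is 'VMFS', 'NFS', or 'VVOL' for a storage container.
        if ds.summary.type == "VVOL":
            print("VVol storage container: %s (%d GiB capacity)"
                  % (ds.name, ds.summary.capacity // 2**30))
finally:
    Disconnect(si)
```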

Protocol Endpoints are the I/O transport mechanism hosts use to access the storage containers. An endpoint can be thought of as an I/O demultiplexer, since there are potentially thousands of VVols belonging to hundreds of VMs residing on dozens of hosts.

The Storage Provider, or VASA Provider, is an out-of-band communication mechanism between VMware vCenter and the storage array. It is the responsibility of the storage vendor to create a storage provider for its storage systems, implemented through VASA. vSphere APIs for Storage Awareness (VASA) is a set of APIs that VMware has created to permit storage arrays to integrate with vCenter for management functionality.

Storage Policies are set up based on an application’s needs: typically performance, availability and data protection level. These capabilities are made visible as a result of the VASA APIs. As VMs are created, storage is allocated to each VM through a VVol that is assigned a specific storage policy. That storage policy can be changed at any time based on the needs of the application.
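
Because these policies are ordinary SPBM objects, they can be inspected programmatically. The sketch below is modeled on the pyvmomi community samples: it connects to vCenter’s SPBM endpoint (/pbm/sdk) by reusing the vSphere session cookie and prints the requirement profiles an administrator has defined. Treat the endpoint path, version string and cookie handling as assumptions to verify against your vCenter build; `si` is the live connection from the earlier sketch.

```python
# Hedged sketch: list SPBM storage policies (requirement profiles).
import ssl
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

VC_HOST = "vcenter.example.com"  # placeholder

def get_pbm_content(si):
    # Reuse the authenticated vSphere session cookie for the SPBM endpoint.
    session_cookie = si._stub.cookie.split('"')[1]
    VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
    stub = SoapStubAdapter(host=VC_HOST, path="/pbm/sdk",
                           version="pbm.version.version1",
                           sslContext=ssl._create_unverified_context())
    return pbm.ServiceInstance("ServiceInstance", stub).RetrieveContent()

pbm_content = get_pbm_content(si)
pm = pbm_content.profileManager
# Ask for requirement (VM-side) profiles defined on storage entities.
profile_ids = pm.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE_ENTITY"),
    profileCategory="REQUIREMENT")
if profile_ids:
    for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
        print(profile.name, "-", profile.description)
```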

What’s the Problem VVols Solve?

Most storage architectures that support a virtualized infrastructure are block based and accessed by either the iSCSI or Fibre Channel (FC) protocol. Typically, LUNs are assigned to the VMware cluster, then multiple VMs across multiple hosts use that volume to place their VMDKs. Some environments will create different LUNs or volumes for the different types of storage media that may be available to the storage system. For example, a high-performance LUN can be created from flash storage on the array and a more basic LUN can be created from the hard disk tier. All the VMs are then, somewhat generically, poured into the available LUNs, making it difficult to analyze the storage resource consumption and needs of a particular VM.

The problem is that at the point of VM creation it’s not possible to manage storage at a VM level, only at a LUN/Datastore level. Additionally, creating new Datastores requires a server administrator and a storage administrator to coordinate the storage association. Even more problematic is when a specific VM needs to have its performance boosted or downgraded depending on the current storage I/O load. While Storage vMotion is impressive, it does not provide an automated way to move a single VM based on changing I/O requirements.

There is also limited facility to ensure that mission-critical VMs’ I/O requirements are met, since there is no way to differentiate performance between VMs within the LUN.

The Workarounds

There are three basic workarounds to this problem. The first is to use more LUNs. A unique LUN could be created for each mission-critical application to make sure that it’s not impacted by other VMs, as it would be on a shared LUN. This approach still does not address the automated movement of data in response to changing I/O requirements, and it creates a storage management nightmare as the number of LUNs to be managed potentially grows into the hundreds or even thousands.

The second workaround is to overcompensate by providing much more performance than would ever be needed by any particular VM. This approach is exemplified by the all-flash array. If all the VMs are on all-flash, all the time, then there is no need to balance performance. Basically all VMs get “Gold” service whether they need it or not. While this approach essentially solves the problem, it does so at an expense that’s too high for many data centers.

The third option is to use a network attached storage (NAS) system. Since VMware can support NFS-based file systems, a single volume can be mounted to the infrastructure, enabling storage to be controlled on a per-VM basis. In fact, many NAS systems allow for the automated or manual locking of particular VMs into flash to assure higher performance. But there are multiple drawbacks to a pure NAS approach, such as application compatibility, bandwidth limitations and the lack of T10 UNMAP support.

VVols to the Rescue

VVols provide VM awareness to block-based systems, abstracting the VM from the LUN. Behind the scenes the storage system is essentially creating a volume for each VM, but VVols mask this process from both the storage and the VMware administrator.

When implementing VVols, the storage administrator creates a storage container that can be assembled from the available storage media in the storage system. A common example used to describe a VVol configuration is three separate storage systems that seemingly work together, allowing a VM’s performance profile to change over time. In this example an all-flash container can be created for VMs demanding high performance, a hybrid (flash and HDD) container can be made for VMs that need more standard performance, and a container can be composed solely of high-capacity HDDs for low-performance or legacy VMs.

While using separate systems is a useful way to describe the software-defined nature of VVols, it does not effectively communicate the performance impact of a VM being migrated from one storage platform to another. A hybrid array, on the other hand, if it fully supports VVols, could simply adjust its internal settings and require no data migration at all.

When a new VM is created it is assigned to one of these storage containers directly from the vCenter interface. This means that the storage administrator can pre-configure the storage array for use by the VMware administrator, and then doesn’t need to be involved each time a VM is created. If the performance demands of a particular VM change, its storage policy can be changed on the fly at the VM level.
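
As an illustration of that on-the-fly change, here is a hedged sketch that reassigns an existing VM’s home storage policy by attaching a different SPBM profile ID through ReconfigVM_Task. The VM name and profile ID are placeholders, `si` is a live pyVmomi connection as in the earlier sketches, and per-disk policies would be set on each disk’s device spec rather than on the VM home.

```python
# Hedged sketch: change a VM's storage policy without touching the guest.
from pyVmomi import vim

def set_vm_storage_policy(si, vm_name, profile_id):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    # A DefinedProfileSpec points the VM home at an existing SPBM profile.
    spec = vim.vm.ConfigSpec(
        vmProfile=[vim.vm.DefinedProfileSpec(profileId=profile_id)])
    return vm.ReconfigVM_Task(spec=spec)  # returns a vCenter task to await

# Usage (both arguments are placeholders):
# task = set_vm_storage_policy(si, "sql01", "<spbm-profile-uuid>")
```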

Each VM will have three or more VVols associated with it. Typically there is a VVol for the VM configuration, a VVol for the temporary and swap files, and at least one or two for the actual data associated with the VM. Each of these can be assigned to a container with a different class of performance. This allows, for example, temporary system files to go to high-performance storage with low redundancy while critical application files are stored on medium-performance storage with high redundancy.
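
The data VVols, at least, are visible through the API: on a VVol datastore each file-backed virtual disk carries a backingObjectId identifying its VVol, as the hedged sketch below shows. The config and swap VVols back the VM home and swap files rather than VirtualDisk devices, so they do not appear in this listing; `vm` is a vim.VirtualMachine obtained as in the earlier sketches.

```python
# Hedged sketch: enumerate a VM's virtual disks and their VVol object IDs.
from pyVmomi import vim

def list_data_vvols(vm):
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # File-backed disks on a VVol datastore expose the VVol ID here.
            vvol_id = getattr(dev.backing, "backingObjectId", None)
            print("%s: file=%s vvol=%s" % (
                dev.deviceInfo.label,
                dev.backing.fileName,
                vvol_id or "n/a (not on a VVol datastore)"))
```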

VVols Are Not a Cure-All

As big a step forward as VVol technology is, it’s not an answer for all the problems facing the storage infrastructure. Most of the diagrams that VMware uses to describe VVol implementations show three storage systems on a storage network: typically a hard disk based array, an all-flash array and a hybrid array. If the storage administrator follows that architectural design for a VVol implementation they may be buying, or at least managing, all three systems. This means that if there is a policy change, data still needs to be migrated between systems. Even if the different tiers can be built out of a single storage system, data still needs to be moved between the various tiers in order to accommodate the need for accelerated performance.

As a solution, some manufacturers have moved to the next level of VVol integration. These systems allow the storage containers to be created from a single storage system and then, based on the I/O expectations of each container, adjust performance by changing flash allocation. The result is that the VVols associated with each VM in that container can have their performance profiles adjusted without a data migration. Instead of moving data, the storage system adapts to the needs of the VM.

Conclusion

Although NAS systems will benefit from VVols, the technology essentially levels the playing field between block storage and NAS storage systems by providing the granularity of control that block storage has previously lacked. It should also save a tremendous amount of time in the creation of each VM, allowing the storage allocation process to be conducted entirely by the VMware administrator, but within the policies set by the storage administrator. It is also important to understand that while many storage vendors’ products will be “VVol compatible”, there will be varying degrees of capability between them. This can range from the transparent management of a LUN per VM to complete support, where the storage system adapts its flash allocation based on VVol demands. Depending on the storage array’s level of VVol integration and the native capabilities exposed by VASA, VM granularity will allow storage SLAs to be managed per VM.

About NexGen Storage

NexGen builds hybrid flash arrays with data management capabilities that enable its customers to prioritize data and application performance based on business value. The goal is to avoid the high cost of all-flash arrays that treat all data as high-value data. NexGen’s integration with VVols allows for even more intelligent data prioritization without the penalty of having to move data between different physical storage systems.

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration and product selection.
