Guaranteeing Application Storage Performance When Virtualizing

It’s no surprise that traditional storage performance falls off a cliff in heavily virtualized server environments. Originally designed to interface with a limited number of hosts, legacy dual-controller storage architectures simply cannot meet the I/O workload demands of multiple virtual machine (VM) application instances. To avoid storage I/O bottlenecks, Hyper-V virtualized infrastructure planners need to consider newer storage paradigms that can dynamically optimize the storage I/O path between virtualized applications and the storage resources assigned to them.

Bottoms Up

Traditional shared storage systems typically take a “bottom-up” view when provisioning storage resources to virtual machines (VMs). The array’s sole task is to dole out access to disk LUNs or volumes to servers without performing any analytics or discovery of the actual application’s I/O profile. In short, all the intelligence resides at the array level, and there is limited insight into the I/O needs at the VM layer.

This is a recipe for disaster, as I/O workload characteristics can vary widely across these systems. For example, some VMs may need to perform a low volume of large sequential reads, others a high volume of small random writes, and still others may generate almost no storage I/O traffic at all.
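
To make the contrast concrete, here is a minimal sketch, in Python, of how these divergent per-VM I/O profiles might be described. The VMProfile structure and the sample workloads are hypothetical illustrations, not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class VMProfile:
    """Hypothetical descriptor of one VM's storage I/O personality."""
    name: str
    avg_request_kb: int   # typical request size
    read_pct: float       # fraction of requests that are reads
    sequential: bool      # mostly sequential vs. mostly random access
    iops: int             # rough request rate

# Three of the divergent workloads described above, all of which
# may end up sharing the same LUN or volume.
workloads = [
    VMProfile("media-server",  avg_request_kb=1024, read_pct=0.95, sequential=True,  iops=200),
    VMProfile("oltp-database", avg_request_kb=8,    read_pct=0.30, sequential=False, iops=15000),
    VMProfile("idle-devbox",   avg_request_kb=4,    read_pct=0.50, sequential=False, iops=5),
]

for w in workloads:
    print(f"{w.name}: {w.iops} IOPS of {w.avg_request_kb}KB "
          f"{'sequential' if w.sequential else 'random'} requests")
```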

When accessing the same LUN or volume, concurrent VM I/O requests can bring the controller to its knees, resulting in significantly degraded application performance. Known as the “storage I/O blender,” this issue is common to virtualized environments and, unless remedied, can severely impact the ROI of a virtualization initiative.
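
A quick simulation shows why the blender effect is so damaging. Each VM below issues perfectly sequential reads within its own region of a shared volume, yet once the streams interleave at the shared controller, virtually no two consecutive requests are adjacent. The block layout and stream sizes are illustrative only:

```python
from itertools import zip_longest

# Three VMs, each reading sequential 4KB blocks in its own region of a shared LUN.
streams = [list(range(base, base + 50)) for base in (0, 100_000, 200_000)]

# Interleave the streams as the shared controller would see them arrive.
blended = [b for group in zip_longest(*streams) for b in group if b is not None]

# Count how many consecutive requests still touch adjacent blocks.
seq = sum(1 for a, b in zip(blended, blended[1:]) if b == a + 1)
print(f"Sequential pairs after blending: {seq}/{len(blended) - 1}")  # ~0: random to the array
```

Three well-behaved sequential workloads thus arrive at the array looking like one large random workload, which is exactly the access pattern that spinning disks handle worst.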

VM Driven Storage

Interestingly, various industry sources cite application performance as the number one concern among IT administrators supporting virtualized infrastructure. To work around the limitations of legacy storage, some storage administrators over-provision disk spindles or add high-speed solid state disk (SSD) or flash devices to their arrays. While this may deliver some incremental performance improvement, it also drives up the cost of these environments. Furthermore, this approach does not fully alleviate bottlenecks at the array controller level or within the hypervisor itself.

Virtualized application infrastructure requires a storage solution capable of distinguishing the unique I/O characteristics of each individual VM, so that the allocated storage resources can be customized to each VM’s workload. In fact, server virtualization suppliers have started to integrate this “top-down” approach to storage management into their offerings by packaging storage management services, like snapshots and replication, at the hypervisor or guest operating system (OS) level. This allows these services to operate on a per-VM basis.

Holistic VM Storage Management

The next logical step, therefore, is to move data placement services out of the traditional storage array and into a virtualized storage controller that operates at the hypervisor host, so that a virtual storage stack can be created for each VM. These virtual storage controllers would then connect to storage nodes in a virtualized pool of storage resources, enabling end-to-end intelligence between the virtual application, the virtual storage controller and the storage resources.

With direct insight into the specific I/O patterns of each individual VM, the virtual controller can optimize I/O and coordinate data movement between the virtual server and the storage capacity on the back end. This not only helps eliminate the storage I/O blender but, since it is automated, also relieves storage administrators from having to perform tuning operations that are mostly reactive in nature. In other words, under this top-down architecture, critical applications are less likely to hit storage performance walls (points where storage I/O cannot keep up with demand) and suffer periods of degraded performance.

VM Prioritized I/O

Another way to help guarantee application quality of service (QoS) is for the virtual storage controller to enable administrators to assign different levels of priority access to storage resources. For example, mission-critical business applications could be assigned Platinum-level access, business-critical systems could be granted Gold-level access and all other applications might be given Silver-level access.

In this manner, I/O prioritization occurs before data even leaves the individual VM. This is an important capability, as today’s multi-tenant VM environments are susceptible to situations where one VM, otherwise known as a “noisy neighbor,” monopolizes storage resources to the detriment of all the other applications on the hypervisor.
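
As a sketch of how such tiering might be enforced at the host (the tier names, IOPS budgets, and class below are invented for illustration, not any vendor’s actual interface), the virtual storage controller could translate each VM’s service tier into a token-bucket I/O budget and throttle requests before they ever leave the hypervisor:

```python
# Hypothetical per-tier IOPS budgets enforced by the virtual storage controller.
TIER_IOPS = {"Platinum": 20_000, "Gold": 8_000, "Silver": 2_000}

class VMThrottle:
    """Simple token bucket: one bucket per VM, refilled at its tier's rate."""
    def __init__(self, vm: str, tier: str):
        self.vm, self.tier = vm, tier
        self.tokens = float(TIER_IOPS[tier])

    def refill(self, elapsed_s: float) -> None:
        cap = TIER_IOPS[self.tier]
        self.tokens = min(cap, self.tokens + cap * elapsed_s)

    def admit(self) -> bool:
        """Admit one I/O if this VM still has budget in the current interval."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # excess I/O is queued rather than starving other VMs

noisy = VMThrottle("batch-analytics", "Silver")
granted = sum(noisy.admit() for _ in range(5_000))
print(f"{noisy.vm}: {granted} of 5000 I/Os admitted this second")  # capped at 2000
```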

The software, however, should be nimble enough to allow virtual administrators to modify profile assignments as needed to accommodate applications that may be experiencing additional latency. So if an important business application needed a higher service level, it could be promoted from Gold to Platinum and the virtual controller software would adjust accordingly. In addition, the solution should be flexible enough to support physical server environments and provide granular storage resource control. For instance, an administrator may want to dedicate a virtual storage controller to a mission-critical VM to ensure the highest QoS possible and guarantee application performance.
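
Continuing the hypothetical sketch above, promoting a latency-sensitive application from Gold to Platinum then becomes a one-line policy change rather than a storage re-provisioning exercise:

```python
def set_tier(throttle: VMThrottle, new_tier: str) -> None:
    """Illustrative on-the-fly promotion; takes effect on the next refill cycle."""
    throttle.tier = new_tier
    throttle.tokens = min(throttle.tokens, TIER_IOPS[new_tier])

erp = VMThrottle("erp-frontend", "Gold")
set_tier(erp, "Platinum")  # grant the application a higher service level
```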

Getting On the Grid

Ultimately, storage capacity and performance need to scale linearly to accommodate the growth of virtualized infrastructure. As more VMs and hypervisor hosts are added to the environment, the corresponding storage resources also need to grow in lockstep. Ideally, a pool of virtualized storage resources is presented to all hypervisor servers, with storage read and write activity automatically load-balanced across the entire grid. As more nodes are added to the grid, virtual environments would benefit from massively parallel computing and I/O streaming.

Furthermore, there could be a mix of high-performance storage nodes, configured with flash and SSD resources, and “capacity” storage nodes, configured with high-density disk drives. High network I/O throughput could also be achieved by attaching each storage node to multiple 1GbE or 10GbE network connections. This would provide infrastructure planners with the optimal balance of storage resources for meeting varied VM I/O requirements and help guarantee the performance of the virtual business applications in their environments.

Through parallelism, data could be striped across all the nodes in the grid for both data redundancy and high performance. Each virtual controller would break up read and write I/O into multiple separate chunks and place them on the appropriate storage node resources to match the QoS performance policy of the underlying VM. This segmentation of the data would ensure that no single storage node becomes a performance bottleneck or goes underutilized as the virtual application environment scales out.
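
A simplified sketch of that placement logic follows; the node classes, chunk size, and policy-to-class mapping are hypothetical. The controller splits each write into fixed-size chunks, restricts candidate nodes to those matching the VM’s QoS policy, and round-robins the chunks across them:

```python
# Hypothetical grid: performance nodes (flash/SSD) and capacity nodes (dense disk).
NODES = {
    "flash-1": "performance", "flash-2": "performance",
    "disk-1": "capacity", "disk-2": "capacity", "disk-3": "capacity",
}
POLICY_CLASS = {"Platinum": "performance", "Gold": "performance", "Silver": "capacity"}
CHUNK_KB = 256

def place_write(data_kb: int, tier: str) -> list[tuple[str, int]]:
    """Split a write into chunks and stripe them across nodes matching the tier."""
    eligible = [node for node, cls in NODES.items() if cls == POLICY_CLASS[tier]]
    chunks = -(-data_kb // CHUNK_KB)  # ceiling division
    return [(eligible[i % len(eligible)], CHUNK_KB) for i in range(chunks)]

# A 1MB Platinum write lands as four chunks striped across both flash nodes.
for node, size in place_write(1024, "Platinum"):
    print(f"{size}KB chunk -> {node}")
```

In a real grid, a redundancy scheme would also be applied across the chunks; that detail is omitted here for brevity.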

Conclusion

Storage technology needs to evolve to meet the growing and unpredictable I/O demands of dynamic, virtualized server environments. Legacy storage systems, which take a bottom-up approach to allocating storage, force infrastructure planners to over-provision storage resources and artificially limit the number of VMs that can be assigned to each hypervisor host. This stunts business agility and reduces the ROI of a virtualization initiative.

With hypervisor vendors placing data storage services, as Microsoft has with Hyper-V, directly at the hypervisor or guest OS level, storage administrators can configure these services on a per-VM basis to uniquely match the data protection needs of their individual business applications. Likewise, VM-optimized storage offerings, like those from Gridstore, are following this trend by enabling virtual and storage infrastructure planners to allocate and configure storage resources based on the specific storage I/O and QoS needs of each application in their environment.

By deploying a grid-based storage architecture which scales performance linearly as nodes are added to the virtual storage infrastructure, IT organizations can achieve higher VM density across their hypervisor hosts and respond more rapidly to business demands. Just as importantly, by assigning and managing storage resources at the virtual storage controller layer, virtual application QoS can be assured, helping businesses consistently meet service level agreements (SLAs) even as VM workloads change over time.

The ability to provision, manage and prioritize storage performance on an individual VM basis is what constitutes true software-defined storage (SDS). Gridstore’s offering is helping to deliver these capabilities, enabling multi-petabyte scalability in Hyper-V environments while guaranteeing application performance.

Gridstore is a client of Storage Switzerland

As a 22-year IT veteran, Colm has worked in a variety of capacities, ranging from technical support of critical OLTP environments to consultative sales and marketing for system integrators and manufacturers. His focus in the enterprise storage, backup and disaster recovery solutions space spans mainframe and distributed computing environments across a wide range of industries.
