Storage QoS – A New Requirement for Shared Storage

In the virtualized data center, shared storage is truly shared. Each attached server host may have dozens of virtual machines all accessing the same storage system at the same time. This creates a new requirement for shared storage systems: how to provide a guaranteed level of storage performance to mission-critical applications as they are virtualized.

Sharing the Storage Chassis

In IT we’ve been sharing storage for a long time. Originally, this meant consolidating capacity that was formerly direct-attached to servers onto a common network-attached array. In the one-application-per-server data center this design worked pretty well. But it was really only physical sharing of the storage infrastructure: actual capacity was hard-allocated to each server, most often with dedicated disk drives or storage volumes. Performance wasn’t shared either, since throughput (and especially IOPS) was a function of the number of spindles in a RAID group or storage volume, and each server essentially had its own disk drives and a dedicated path to those drives.
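The spindle-count relationship above is simple enough to sketch. The numbers below are illustrative assumptions, not measured values: the per-drive figure is a commonly cited ballpark for a single 15K RPM disk, and real throughput also depends on RAID level, caching, and I/O mix.

```python
# Back-of-the-envelope IOPS math for a dedicated, spindle-based RAID group.
# PER_DRIVE_IOPS is an assumed ballpark for one 15K RPM disk, for
# illustration only.
PER_DRIVE_IOPS = 180

def raid_group_iops(spindles: int, per_drive: int = PER_DRIVE_IOPS) -> int:
    """Aggregate IOPS scales roughly linearly with spindle count."""
    return spindles * per_drive

# A server with a dedicated 8-drive group gets a predictable ceiling:
print(raid_group_iops(8))  # 1440
```

This is why hard-allocating drives worked as a crude performance guarantee: the server owning those eight spindles knew its ceiling, and no other server could eat into it.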

Sharing Storage Performance

In traditional disk arrays, storage performance was constrained more by disk drive latency than by the storage controller. But with the advent of flash, disk drive latencies have been masked behind sophisticated caching and tiering algorithms, making the controller a gating factor for storage performance as well. Flash also makes it easier to connect more servers to the shared array and, in turn, easier to overwhelm the array’s ability to service storage requests. Since performance in a flash environment is no longer a function of spindle count, hard-allocating disk drives to each server no longer guarantees performance for each individual server.

Volatile Workloads

In economic terms, the storage system represents the ‘supply’ side of the equation. On the demand side, server workloads are changing as well. Instead of a stand-alone server running a single application, virtualization has created very dense compute clusters running tens or hundreds of virtual machines. The aggregate workloads from these virtual machines are much more volatile than those of the stand-alone servers that shared storage resources a decade ago.

To manage this situation, storage systems must provide both the performance to support these compute clusters and an assurance that each virtual server will consistently receive the performance it’s expecting. What’s needed is a quality of service (QoS) function within the shared storage infrastructure.
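One common way to implement such a QoS function is a token bucket per tenant, which caps each virtual server’s I/O rate while allowing short bursts. The sketch below is a minimal illustration of that idea, not any vendor’s implementation; the class name, rates, and burst sizes are all assumptions for the example.

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch of a per-tenant IOPS cap.

    `rate` is the sustained IOPS allowed; `burst` is how many I/Os a
    tenant may briefly issue above that rate. Illustrative only.
    """
    def __init__(self, rate: float, burst: float):
        self.rate = rate           # tokens (I/Os) replenished per second
        self.capacity = burst      # maximum tokens the bucket can hold
        self.tokens = burst        # start full, so bursts are allowed
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Admit one I/O if a token is available, else throttle it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost    # admit the I/O
            return True
        return False               # throttle: queue or reject

# Each VM gets its own bucket, so a noisy neighbor can't starve the rest:
vm = TokenBucket(rate=10, burst=5)
print(all(vm.allow() for _ in range(5)))  # True: the burst is admitted
print(vm.allow())                         # False: the next request waits
```

Because every tenant draws from its own bucket, one VM exhausting its allowance throttles only itself; the guaranteed rates of the other tenants are unaffected.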

Sharing in the Cloud

Enterprise computing is moving to cloud-based data centers at an accelerating pace. As companies struggle to deal with rising IT costs and shrinking budgets, they’re finding the cloud alternative more and more appealing. But cloud-based storage infrastructures are redefining “shared storage” yet again, bringing a new term to the IT vernacular: “multi-tenancy”.

Unlike the data centers of even a few years ago, storage systems and the servers that connect to them are increasingly virtualized. This has greatly increased the number of server instances that can be connected to a shared (“multi-tenant”) storage system and amplified the potential problems along with it. As these dynamic workloads compete for finite storage resources the performance that applications receive can fluctuate, sometimes wildly, with a negative impact on cloud customers.

These three factors (more volatile workloads, less controllable storage system performance and true multi-tenancy) have created a perfect storm of sorts. Applications are not getting the performance they need, and IT can’t manage around the problem the way it used to. What’s needed is a more powerful way to assure storage performance for every host in a multi-tenant environment.

The concept of quality of service is now being applied to storage systems. Its ability to control the resources that determine storage performance is making it a sought-after feature in multi-tenant arrays being implemented in enterprise data centers as well as the cloud. In the next column we’ll look at multi-tenancy in more detail and at the different ways QoS is being applied in the storage industry.


Eric is an Analyst with Storage Switzerland and has over 25 years experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States.  Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt.  He and his wife live in Colorado and have twins in college.
