Colm Keegan, Senior Analyst
Server virtualization has delivered many benefits to the data center, but one area that has become increasingly challenging is storage performance management. Legacy storage was designed to allocate disk resources from physical storage arrays to physical server hosts, and therein lies the problem. Physical servers now host dozens of business applications on virtual machines (VMs) with differing storage I/O profiles, which makes storage performance tuning highly problematic.
To combat this issue, storage administrators must keep a detailed, manual inventory of how storage LUNs and volumes are allocated across VMs and then attempt to identify, in real time, where performance bottlenecks are occurring within the infrastructure. To stave off performance issues, infrastructure planners sometimes resort to over-provisioning storage resources and/or limiting the number of VMs created on a hypervisor host. This flies in the face of why organizations virtualized their environments in the first place: to lower data center infrastructure costs and reduce operational complexity.
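The manual bookkeeping described above amounts to maintaining a LUN-to-VM map and cross-referencing it against latency measurements by hand. A minimal sketch, using entirely hypothetical VM names, LUN labels, and latency figures, of what that chore looks like when reduced to code:

```python
# Illustrative sketch (hypothetical data): the kind of manual VM-to-LUN
# inventory and hot-spot check that legacy storage forces admins to keep.

# Map each LUN to the VMs whose virtual disks it backs.
lun_inventory = {
    "LUN-01": ["web-vm-1", "web-vm-2", "sql-vm-1"],
    "LUN-02": ["exchange-vm", "file-vm"],
}

# Latest observed latency per LUN, in milliseconds (sample values).
lun_latency_ms = {"LUN-01": 38.0, "LUN-02": 6.5}

LATENCY_THRESHOLD_MS = 20.0  # SLA ceiling assumed for this example


def find_bottlenecks(inventory, latencies, threshold):
    """Return {lun: affected_vms} for every LUN breaching the threshold."""
    return {
        lun: inventory[lun]
        for lun, ms in latencies.items()
        if ms > threshold
    }


hot = find_bottlenecks(lun_inventory, lun_latency_ms, LATENCY_THRESHOLD_MS)
for lun, vms in hot.items():
    print(f"{lun} is over SLA; affected VMs: {', '.join(vms)}")
```

The point of the sketch is how indirect the mapping is: the storage array only sees LUNs, so every per-VM question requires a lookup table that an administrator has to keep accurate by hand.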
If infrastructure planners think that the storage tail is wagging the virtualized server dog, they may be right. The fact is that storage architectures have to fundamentally change to meet the new realities of virtualized server environments. In a recent webinar hosted by Storage Switzerland and Tintri, participants were asked, “Outside of performance, what is your biggest storage challenge?” The answers were nearly evenly split among:
- Storage Volume Design
- Storage Performance Tuning
- Increasing VM Density
- Application Performance
These responses validate that legacy storage is making it exceedingly difficult to manage all the vagaries of multi-tenant virtualized infrastructure. This threatens the ROI and TCO of a virtualization project on many fronts.
First, if storage cannot consistently deliver the performance SLAs required to support mission-critical VMs, business application owners will eventually take their business elsewhere. The cloud becomes an attractive alternative, and providers like Amazon are waiting in the wings to replace corporate IT.
Second, if operational tasks become so arduous that they consume a good chunk of administrator time just to keep operations healthy, the overhead may eventually erode the server consolidation benefits realized at the outset of the project. Lastly, if VM density ratios are kept artificially low because of the limitations of the underlying storage, business agility will suffer and capital costs will rise.
The fact is that storage paradigms have to change to accommodate the new realities of server virtualization. Storage administration needs to be greatly simplified by enabling administrators to have a real-time, end-to-end view of the virtual server environment and the storage it is utilizing. The storage system itself needs a granular understanding of all the VMs it is supporting.
Performance tuning also needs to be automated. If storage subsystems were designed to serve 99% of I/O requests from an efficiently sized flash tier and store inactive data on lower-cost storage, hypervisors could cost-effectively host more VMs and administrators would be far less burdened with constant tuning. What’s more, it would no longer be necessary to over-provision storage capacity to ensure VM application quality of service.
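The tiering idea above can be sketched in a few lines. This is not Tintri’s algorithm, just a minimal greedy illustration under stated assumptions: given synthetic per-block access counts, keep placing the hottest blocks on flash until the flash tier absorbs roughly 99% of observed I/O, and demote the rest to lower-cost disk.

```python
# Minimal sketch (synthetic access counts): sizing a flash tier to absorb
# ~99% of I/O and demoting cold data to lower-cost disk.

# Hypothetical per-block read counts gathered over a sampling window.
block_reads = {"b1": 500, "b2": 300, "b3": 150, "b4": 40, "b5": 8, "b6": 2}


def plan_tiers(reads, hit_target=0.99):
    """Greedily place the hottest blocks on flash until the flash tier
    covers `hit_target` of all observed I/O; the rest go to disk."""
    total = sum(reads.values())
    flash, covered = [], 0
    for block, count in sorted(reads.items(), key=lambda kv: -kv[1]):
        if covered / total >= hit_target:
            break
        flash.append(block)
        covered += count
    disk = [b for b in reads if b not in flash]
    return flash, disk


flash, disk = plan_tiers(block_reads)
# With these sample counts, four hot blocks cover 99% of reads,
# so only they need flash; the two cold blocks land on disk.
```

A real array would rerun this continuously as access patterns shift, which is exactly the “constant tuning” burden that automation takes off the administrator.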
To learn more about how Tintri’s VMware-aware storage technology is designed to work hand in hand with virtualized server infrastructure, please watch the on-demand version of the Storage Switzerland and Tintri webinar.