Maximum VM Density Requires Visibility

Most data centers have fully embraced server virtualization, with many claiming that over 50% of their server environment is virtualized. The technology's early success has been unprecedented in IT terms, but a key next step is now needed to maximize virtualization's already impressive ROI: the virtual machine-to-host ratio, also known as "VM density," must increase significantly.

Watch our on-demand webinar "Max VM Density Requires Optimal Storage Networking & Operational Transparency"

As we outlined in our paper "The Top 5 Requirements of a Next Generation Storage Network Architecture," today's storage network is under pressure to perform from all sides. On the compute side, IT initiatives like Big Data, scale-up databases, and server and desktop virtualization can generate more I/O demand than ever. And on the storage side, flash-assisted storage systems can now readily respond to that I/O demand.

In the middle sits the storage network, and it can become the bottleneck if it is not upgraded at the same pace as the compute and storage infrastructures. But as we explain in our paper, increasing bandwidth is only one aspect of the storage network that needs to improve. Administrators also need granular control, visibility, scalability and reliability. Of these, visibility is often the most lacking, and a case can be made that it is the most important, especially as each host is asked to support an increasing number of virtual machines or users.
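The bottleneck argument is easy to reason about as a back-of-the-envelope check: end-to-end throughput is capped by whichever tier offers the least capacity. The sketch below is purely illustrative (the function name and the example numbers are our own, not from any product), but it shows why a fabric that lags compute and flash upgrades becomes the limiting tier.

```python
# Illustrative sketch: the slowest of the three tiers caps end-to-end I/O.
# All names and figures here are hypothetical examples, not measured data.

def bottleneck_tier(compute_demand_gbps: float,
                    network_bandwidth_gbps: float,
                    storage_capability_gbps: float) -> str:
    """Return which tier caps end-to-end throughput."""
    tiers = {
        "compute": compute_demand_gbps,
        "network": network_bandwidth_gbps,
        "storage": storage_capability_gbps,
    }
    return min(tiers, key=tiers.get)

# 40 hosts generating ~2 Gb/s of virtualized I/O each, a 64 Gb/s fabric,
# and flash arrays able to absorb 96 Gb/s: the network is the cap.
print(bottleneck_tier(40 * 2, 64, 96))  # -> network
```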

Before virtualization, the conventional wisdom in the data center was one application per server. This "shared nothing" topology meant easy troubleshooting and reduced risk if there was a failure somewhere in the environment. But it also meant very poor resource utilization, especially for computing resources.

Virtualization is the cure for under-utilization, especially as data centers move toward greater VM densities. But increasing the number of virtual machines that each host supports also removes much of the headroom that IT professionals have counted on to accommodate a sudden spike in performance demands. The same challenge is true for database applications that are scaled to meet higher and higher user counts. Dealing with an unexpected spike in demand in a way that does not impact the user experience is critical.
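The density-versus-headroom trade-off can be made concrete with a deliberately pessimistic model: assume every VM can spike to some multiple of its average I/O demand at the same time, and size density so the host can still absorb that worst case. This is our own simplified illustration, with assumed numbers, not a sizing formula from any vendor.

```python
import math

def max_vm_density(host_iops_capacity: int,
                   avg_vm_iops: int,
                   spike_multiplier: float = 1.5) -> int:
    """Largest VM count per host that still absorbs a simultaneous spike.

    Pessimistic assumption: every VM spikes to spike_multiplier times
    its average I/O demand at once. Real workloads rarely do, so this
    leaves headroom rather than consuming it.
    """
    worst_case_per_vm = avg_vm_iops * spike_multiplier
    return math.floor(host_iops_capacity / worst_case_per_vm)

# A host good for 60,000 IOPS, VMs averaging 500 IOPS each:
# naive density is 120 VMs, but reserving 50% spike headroom caps it at 80.
print(max_vm_density(60_000, 500))  # -> 80
```

Raising the spike multiplier trades density for resilience; without visibility into actual per-VM demand, the multiplier is a guess in either direction.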

While the other four requirements of next generation storage networks do help provide more headroom, visibility is required to make sure that the additional bandwidth is being allocated to the virtual machines that can actually take advantage of it. Storage network management should also provide insight and trending into what is coming next, so future spikes in performance demand can be planned for and managed through. IT is at its best when it is proactive, solving potential performance problems that are on the horizon, not reacting to problems that have already occurred.

You can access our white paper here, and you can listen to our on-demand webinar about how the next generation storage network can increase the virtualization ROI here.
Watch On Demand

Brocade is a client of Storage Switzerland.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a highly sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
