In a recent test run by HDS and audited by Storage Switzerland, an enterprise NAS system was able to successfully support 15,000 VMs. This is certainly an impressive number, and a performance spec that some larger enterprises may actually need. But what does this mean for a smaller company? Do these kinds of tests hold any real value for an organization with an environment that won’t have thousands of VMs? Let’s take a step back and look at the subject of VM density and see why these tests do mean something for most companies.
Watch the on-demand webinar “Enterprise NAS for Highly Dense VM Environments”
What VM Density Is and Why It Matters
Server virtualization has evolved quite a bit since VMware first appeared on the IT landscape. After a few years of tire kicking, most companies began to virtualize, first with test and development servers and then with their production infrastructures. These days, the focus is on improving their return on that investment, which means increasing VM density or reducing the number of physical hosts required.
VM density is important because a server designed to support virtual machines is more expensive than one that supports a single application. If you can triple the number of VMs per host, and thereby cut the rate at which you add hosts to one-third, the savings are significant. The problem is that the more VMs there are on each host, the more their combined I/O will impact the storage infrastructure.
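The host-count math above can be sketched in a few lines. The VM counts and per-host densities below are illustrative assumptions, not figures from the test:

```python
import math

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Physical hosts required for a given VM count at a given density."""
    return math.ceil(total_vms / vms_per_host)

# Illustrative numbers: 600 VMs at 10 VMs per host, versus a
# tripled density of 30 VMs per host.
before = hosts_needed(600, 10)  # 60 hosts
after = hosts_needed(600, 30)   # 20 hosts -- one-third as many
print(before, after)
```

Tripling density divides the host count (and the associated hardware, licensing, and power spend) by three, which is why density, not raw server count, is the usual optimization target.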
Flash has helped overall VM adoption by accelerating workloads, both at the server level and in shared storage devices. And now, VM-specific backup solutions have matured and brought data protection up to speed with virtualization. But in the drive to support more VMs on each physical host, the storage system has been the limitation.
SAN or NAS
Most high-density VM environments run on block storage, supported by a Fibre Channel SAN. There has been a concern that NAS can’t support these highly dense environments due to NFS overhead and NAS storage system limitations. Can a NAS do the job? Answering this question is where the HDS test comes in.
There’s no shortage of performance tests in the storage space. Many of these results are generated by systems assembled solely to hit a certain level of IOPS or throughput, and they don’t translate down to the scale that most companies could actually use. The HDS test provided enough data points that a viable extrapolation is possible.
The test that confirmed HNAS’s ability to support 15,000 VMs is different because this is a “scale-right” storage system. It scales up and scales out, providing linear performance while maximizing the storage hardware being implemented. This means it will deliver the same per-node performance at one or two nodes that this test achieved with eight nodes.
For a company looking for a storage system that can handle its 200-VM environment, this is a good fit, since a single node can support well over 1,000 VMs. If it grows to 2,000 VMs, it’s covered by adding another node. And if the company triples that, through an acquisition for example, the NAS system can still scale to meet that level of demand, and then some. The test results are what give such a company confidence that this NAS solution can keep up with its growth.
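Assuming the linear scaling the article describes, the extrapolation from the published figure (15,000 VMs on eight nodes) is simple division. The per-node number below is derived, not one HDS published:

```python
import math

TESTED_VMS = 15_000   # VMs supported in the eight-node HDS test
TESTED_NODES = 8

# Assuming linear per-node scaling, each node handles roughly:
vms_per_node = TESTED_VMS // TESTED_NODES   # 1875

def nodes_for(target_vms: int) -> int:
    """Back-of-envelope node count for a target VM population."""
    return math.ceil(target_vms / vms_per_node)

# A 200-VM shop fits on a single node; 2,000 VMs needs two.
print(nodes_for(200), nodes_for(2_000))  # 1 2
```

This is the sense in which a big-number result is useful at small scale: if performance really is linear per node, the same curve answers the 200-VM question and the 15,000-VM question.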
Big number test results can often elicit a “so what” response from many readers who don’t have an environment even close to that used in the test. But when those numbers are linearly scalable, they can be very revealing to users at all levels. In this case, HDS has shown that NAS can indeed support very high-density virtual environments, and provide a valuable solution for companies at both ends of the spectrum.
In this Storage Short, Storage Switzerland’s founder, George Crump, and HDS Global Product Manager, Paul Morrisey, provide a quick summary of the lab results from the HDS test.
For more information on this test and these results, please tune into the StorageSwiss webinar “Enterprise NAS for Highly Dense VM Environments”.