Making Storage the VDI Solution, Not the Problem

Virtual Desktop Infrastructure (VDI) has found a niche in call-center type environments where very large numbers of workers run essentially the same desktops. But in the broader market VDI hasn’t seen the same success, a fact typically blamed on the high cost of supplying enough storage performance to deliver a satisfactory user experience. When VDI projects get going, these competing cost and performance requirements often derail the initiative. VeloBit, with its new vBoost product, has a solution that may move storage out of its perennial role as scapegoat and into the role of “VDI enabler”.

VeloBit claims that its new VDI-specific software, based on its mature HyperCache product, can significantly increase virtual desktop density and do so on moderately configured servers with inexpensive SAN shared storage. Essentially, the company believes it has resolved the cost vs. performance conflict. The goal of our lab test was to determine whether a practical VDI system could be deployed within a limited budget ($50K) by measuring how many virtual desktop instances could be hosted on a vBoost-powered mid-range VDI server.

What is vBoost?

vBoost is a software-based caching solution that uses DRAM as its primary cache tier. Because DRAM does not carry the write penalty of flash or RAID-protected hard drives, it is well suited to the write-heavy I/O profile of the desktop environment. And thanks to the data reduction capabilities built into most virtual desktop hypervisors (Golden Masters and Linked Clones), utilization of premium-priced DRAM is very efficient. On top of this, VeloBit creates even more efficiency by both deduplicating and compressing data in the RAM cache area. Finally, vBoost can also leverage SSD inside the server as a secondary tier for a cost-effective expansion of the DRAM cache.
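The combination described here — deduplicate, compress, keep hot blocks in DRAM, spill overflow to SSD — can be sketched in a few lines. This is a toy illustration of the general technique, not VeloBit’s actual implementation; the class and its behavior are invented for clarity:

```python
import hashlib
import zlib

class TieredCache:
    """Toy two-tier block cache: deduplicated, compressed DRAM tier
    with overflow to an SSD tier. Illustration only, not vBoost code."""

    def __init__(self, dram_limit_bytes):
        self.dram_limit = dram_limit_bytes
        self.dram_used = 0
        self.dram = {}   # content hash -> compressed block (primary tier)
        self.ssd = {}    # content hash -> compressed block (stand-in for SSD)
        self.index = {}  # logical block address -> content hash

    def write(self, lba, data):
        # Deduplicate by content hash: identical blocks from many
        # linked-clone desktops are stored only once.
        key = hashlib.sha256(data).hexdigest()
        self.index[lba] = key
        if key in self.dram or key in self.ssd:
            return  # duplicate block; only the index entry is new
        payload = zlib.compress(data)  # compress before caching
        if self.dram_used + len(payload) <= self.dram_limit:
            self.dram[key] = payload
            self.dram_used += len(payload)
        else:
            self.ssd[key] = payload  # spill to the secondary tier

    def read(self, lba):
        key = self.index[lba]
        payload = self.dram.get(key) or self.ssd.get(key)
        return zlib.decompress(payload)
```

Two desktops writing the same golden-master block would consume cache space only once, which is why non-persistent, linked-clone environments make such efficient use of a modest DRAM budget.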

The VDI Testing Challenge

There are few scenarios in which Storage Switzerland finds it more difficult to develop meaningful tests than the virtual desktop environment. We believe that all testing has to have some relationship to the real world, and creating a realistic testing scenario for even 50 desktops is very difficult. In fact, in the past we have suggested that the only valid test is to bring the environment into production, which of course exposes the VDI project to all sorts of criticism.

The tests conducted by many storage vendors have focused on parameters like how many desktops they can boot in 30 seconds. While interesting, these tests have limited practical value. Certainly, the time to readiness during the morning boot storm is important, but it’s how the storage infrastructure responds to constant read and write requests throughout the day that will determine how well a VDI project is received by users.

To that end VeloBit and Storage Switzerland leveraged LoginVSI to create a more realistic user environment, one that tests the “insides” of the virtual machine. LoginVSI actually simulates various user interactions, like logging in, checking email, creating, editing and printing documents, and then logging out. LoginVSI is the ‘gold standard’ in testing a storage system’s ability to support a highly dense virtual environment.

The Testing Environment

For our test we used a Dell PowerEdge R820, a 12th-Generation Dell server. This is a 4-socket server with six cores per socket configured with 256GB of DRAM. The total cost of this configuration was about $16,000.

As is typical of VDI environments, CPU utilization was very low throughout the test. However, as our results will show, with more RAM enough virtual desktops could have been created to drive CPU utilization much higher.

The storage system was an inexpensive Dell MD 1000 array configured with 15 x 15K RPM 146GB SAS drives directly attached to the server via a PERC 6/E controller. In the first run of the test, which was conducted without the benefit of VeloBit vBoost, the speed of the storage infrastructure was obviously important. In retrospect, our test could have used a more robust storage device with some SSD installed. But considering our testing goal was to keep the customer investment under $50K, that would not have been practical.

The desktop broker was XenDesktop 5.6, and the Windows 7 virtual desktops were configured as non-persistent desktops. Each LoginVSI test was configured to try to run with 250 desktops. As our “after” picture shows, with VeloBit enabled the system could have supported even more desktops with a more balanced configuration. However, given the goal of the test (to use a sub-$50K environment) and the amount of time available for testing, we did not pursue configuration changes. We believe that even with the unbalanced configuration, the test data successfully demonstrate the impact of VeloBit.

The Testing Process

Each test was run the same way. In the first test, the VeloBit vBoost cache was inactive; in the second, it was activated. The non-cached configuration had the same 256GB of DRAM available to it — it simply lacked the intelligence to turn that extra DRAM into a cache area.

Each test run would log in a virtual desktop every 30 seconds and then run through the wide range of user steps. This meant that each desktop was doing something slightly different throughout the test, which is exactly how the real world works. As usual, running tests of this duration is a bit like watching paint dry, so for the most part we just checked in at the end of each test run.

LoginVSI logs in desktops one at a time (until it reaches the configured count of 250) and runs a workload loop, continually measuring performance and latency. As it adds desktops, latency increases. LoginVSI gives each configuration a score called “VSIMax”, which is the number of VMs running when the acceptable latency threshold is crossed. Essentially, VSIMax indicates the maximum number of virtual machines that the configuration will support while still providing adequate performance (a level that will keep users happy).
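The core of the VSIMax idea can be approximated as: sample response time as sessions are added, and report the last session count that stayed under the latency threshold. This is a simplified sketch of the concept only; the actual Login VSI scoring formula uses baselines and weighted response-time averages:

```python
def vsimax(samples, latency_threshold_ms):
    """Approximate the VSIMax idea. samples is a list of
    (active_sessions, avg_response_ms) tuples recorded as desktops
    are added. Returns the last session count under the threshold,
    or None if the threshold was never crossed during the run."""
    supported = None
    for sessions, latency in samples:
        if latency > latency_threshold_ms:
            return supported  # threshold crossed; report prior count
        supported = sessions
    return None  # never crossed: the configuration supports more
                 # desktops than the test loaded
```

A run that never crosses the threshold, like our vBoost-enabled test below, yields no VSIMax score at all — the configuration supports more desktops than were loaded.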

Testing Results

Each test was run for the same amount of time, and we measured the VM performance in both cases. The first test, with no vBoost, established a VSIMax mark at 55 VMs. The second test, with vBoost, didn’t reach the same latency threshold even after 250 desktops were active, indicating that substantially more than 250 virtual desktops could have been loaded in our configuration.

This is obviously a significant difference. The limiting factor in the non-cached system is the storage I/O bottleneck, rather than RAM or CPU constraints. This means that as more desktops are added, random I/O to the storage system accumulates, forcing the organization to confront the cost / performance conflict: either pay to upgrade or replace the storage system, or accept lower performance and lower VM density.

Without caching, system administrators would have to buy another MD 1000 storage shelf and an additional storage controller for every 50 or so virtual desktops. This means that in a 1,000-user environment, approximately 20 shelves of drives would be required. With caching it’s a different scenario.

In this test, the VeloBit-enabled host supported roughly five times more desktops in the same hardware configuration (250+ vs 55). This means that the 1,000 desktop organization described above could be provisioned with one-fifth as many drive shelves. And again, with more time and more RAM we could have seen even better results.
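The shelf arithmetic above can be checked with a quick calculation. The ~50 desktops-per-shelf figure comes from the uncached VSIMax of 55, and the 5x multiplier from the cached run (the helper function name is ours):

```python
import math

def shelves_needed(total_desktops, desktops_per_shelf):
    # Round up: a partially filled shelf still has to be purchased.
    return math.ceil(total_desktops / desktops_per_shelf)

# Uncached: roughly 50 desktops per MD 1000 shelf.
uncached_shelves = shelves_needed(1000, 50)   # -> 20 shelves
# Cached: at least 5x the density per shelf.
cached_shelves = shelves_needed(1000, 250)    # -> 4 shelves
```

The cached figure is a floor, not a ceiling — our vBoost run never hit its latency limit at 250 desktops, so the real per-shelf density (and the savings) could be higher still.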

More importantly, with vBoost in place the number of desktops and hosts could scale almost indefinitely without the need to replace the storage system. Net traffic into the storage infrastructure was minimal.

The next iteration of our test should be done using persistent desktops, but we would expect that to yield very similar results. Also, the vBoost configuration could be extended by adding more DRAM, which should lead to an even higher desktop-per-host ratio. At some point adding DRAM will hit diminishing returns, but we don’t believe we have reached that point yet.


The impact of this test is significant. It opens up VDI to a whole new realm of organizations. In our experience, the theoretical break-even point for justifying the effort and expense of a VDI project is ~500 desktops. With vBoost this is no longer the case. A small server with a little extra DRAM could support a hundred desktops very cost effectively, making VDI affordable for many smaller environments.

One of the most important ramifications of this test is the performance that each of these desktops saw. Storage Switzerland believes that one of the biggest causes of VDI project failure is a lack of user acceptance due to disappointing performance. This is because the “IOPS-per-desktop” best practices provided by desktop virtualization companies have no basis in current reality. Thanks to the widespread adoption of SSDs in notebooks and tablets, users’ expectations are now set by solid-state performance, not SATA hard drives. People now expect faster desktops.

vBoost allows the VDI administrator to confront this increased expectation head-on. Each VDI user now essentially gets a RAM disk to work from, which should easily provide better CPU and storage I/O performance than they are experiencing on their latest-generation devices. Instead of merely tolerating VDI performance, users may come to demand it.


The first goal of this test was to prove that the inclusion of VeloBit’s vBoost would significantly improve virtual-desktop-per-physical-host density. Clearly, an increase from 55 to more than 250 desktops per server confirms this goal was accomplished.

The second, more ambitious goal was to prove that VDI can be more easily justified from a capital expenditure perspective instead of the classic operational justification. If proven, this would lead to more aggressive roll-outs of VDI in more organizations, since a CAPEX savings is a stronger basis for approval than operational savings. Given the configurations listed above and the 5x gain in VM density that we saw, vBoost clearly allows a CAPEX-only case to be made for VDI approval. And of course, IT can still reap all the operational rewards from the project.

Our testing proves that more intelligent use of DRAM, in combination with shared flash or even hard disk drive storage, can make the VDI project one that reduces capital desktop expenditures instead of just operational costs.

VeloBit is a client of Storage Switzerland.

George Crump is the Chief Marketing Officer of StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March of 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.
