The three Ps to a successful VDI deployment – Persistence, Price, Performance

Virtual Desktop Infrastructure (VDI) promises to reduce IT operational costs, improve endpoint data protection and increase endpoint usability. But for a VDI deployment to be successful, it must provide users an experience that is better than the physical desktops they are replacing, and at a reasonable cost.

The problem is that most VDI projects focus on creating a price-per-desktop dollar figure that can support the investment decision, instead of focusing on the user. The next wave of VDI adoption will still require sound economic justification, but will also need to provide an ‘amazing’ user experience. To do that, the VDI infrastructure must deliver on the “three Ps” — Price, Persistence and Performance.


Price per Desktop AND Price for the Experience

Today, VDI optimization solutions and high-performance flash hardware can combine to deliver a highly dense, yet still high-performing, virtual desktop. Said another way, the goal of all VDI projects should be to deliver “MacBook Air performance at the price of a Chromebook”. At the same time, the virtual desktop should no longer be a second-class solution full of compromises and disabled features. For example, most users want to leverage Windows Search functionality, but doing so can require 25 to 30% more IOPS, which must be supplied by even more storage hardware. They also want to install their own applications and run anti-virus solutions. Today’s users essentially want their own desktop, albeit a virtual one.
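As a rough illustration of that overhead, the extra IOPS a storage tier must supply when search is enabled can be sketched as below; the desktop count and per-desktop IOPS figures are illustrative assumptions, not measured values.

```python
# Rough sizing sketch: extra storage IOPS needed when users enable
# Windows Search on their virtual desktops. Desktop count and
# per-desktop IOPS are illustrative assumptions.

def total_iops(desktops: int, iops_per_desktop: float, search_overhead: float) -> float:
    """Aggregate IOPS the storage tier must supply."""
    return desktops * iops_per_desktop * (1 + search_overhead)

baseline = total_iops(1000, 15, 0.0)       # 1,000 desktops, 15 IOPS each
with_search = total_iops(1000, 15, 0.30)   # +30% for Windows Search

print(f"baseline:    {baseline:,.0f} IOPS")    # 15,000
print(f"with search: {with_search:,.0f} IOPS") # 19,500
```

A 30% per-desktop increase translates directly into 30% more aggregate IOPS the storage hardware must absorb, which is why enabling search is often disallowed in cost-constrained designs.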

The quality of the user’s “experience” is about more than performance; it’s about having a virtual desktop that they feel they own. The challenge is that the more the user is allowed to “own” that desktop, the more performance it will demand and the more expensive it may become. Striking a balance between ownership and performance is key to VDI success, and is why the infrastructure needs to address the three Ps.

While we cover the three Ps sequentially in this article, addressing them is not a sequential process. Each of these requirements needs to be addressed in parallel so that a cost-effective, high-performing desktop can be delivered to the user, and so IT can meet the original operational objective of the VDI project: reducing operational costs.

Write Performance – The Roadblock to the Three Ps

Implementing each of the three Ps described below has one critical side effect that must be addressed: they all force behaviors that generate additional write I/O, something that’s already at a high level in the VDI environment and costly for traditional storage to handle. It makes sense, then, to solve the write performance challenges of VDI first, before implementing the three Ps makes them worse.

To address the write performance penalties of the three Ps and the underlying VDI architecture itself, many designs include a server-side flash SSD or PCIe board as cache. Unfortunately, most of these solutions are read-only caches, making them less effective since most VDI workloads have a heavy dose of write activity.

Even if the server-side SSD uses write-back caching to accelerate write traffic, there can still be problems. First, VDI write activity can be so intense that it raises concern over flash wear-out, resulting in a significant increase in the frequency of flash module replacement. And because most write-back caching has no coordination with the VDI hypervisor, there may also be recovery risks: if the flash cache, the VM or the host server fails while unique data still resides in the write cache pool, the result can be corruption or even data loss.

In both cases (read cache or write-back cache) the solutions are not content-aware. They blindly cache data based on the caching algorithm, not the specific needs of the VDI environment, wasting premium flash capacity on less critical data. This lack of awareness also means there is little or no flash capacity optimization using techniques like compression and deduplication.

Finally, most server-side flash solutions have no orchestration with the shared flash tier, a tier that’s increasingly common in VDI environments. This leads to double caching, at both the server and storage layers, reducing cache efficiency and further impacting performance. And both flash layers face the same write overhead challenges described above.

What’s needed is a way to optimize VDI read and write traffic before it gets to the flash tier, and RAM is the ideal place for this optimization to occur, since it’s abundantly available in every server. As we discuss in our article, “Three Ways to Improve Software Defined Storage”, while most hypervisors will “consume” all available DRAM, it is not used effectively. DRAM is also appealing because it’s not redundant with the shared flash investment and, in fact, can add value to it. Finally, RAM is not susceptible to wear from excessive writes.

Vendors like Atlantis, with their USX product, can provide content-aware, RAM-based, read and write caching. Leveraging this content awareness, they can make sure the right data is in the RAM cache at the right time, and optimize RAM capacity by implementing in-line deduplication and compression. Finally, since the solution runs as a virtual machine, it can tap into the hypervisor’s high availability functions to minimize data exposure.
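To illustrate the general technique (this is a minimal sketch, not Atlantis USX's actual implementation), here is an in-RAM write cache with in-line deduplication and compression, where identical blocks, common across cloned desktops, are stored only once:

```python
# Minimal sketch of an in-RAM block cache with in-line deduplication
# and compression. Illustrative of the general technique only.

import hashlib
import zlib

class DedupWriteCache:
    def __init__(self):
        self.store = {}    # fingerprint -> compressed block (one copy per unique block)
        self.refs = {}     # fingerprint -> reference count
        self.lba_map = {}  # logical block address -> fingerprint

    def write(self, lba: int, block: bytes) -> None:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.store:          # new unique block: compress and keep
            self.store[fp] = zlib.compress(block)
            self.refs[fp] = 0
        self.refs[fp] += 1
        old = self.lba_map.get(lba)
        if old:                           # overwrite: release the old block
            self.refs[old] -= 1
            if self.refs[old] == 0:
                del self.store[old], self.refs[old]
        self.lba_map[lba] = fp

    def read(self, lba: int) -> bytes:
        return zlib.decompress(self.store[self.lba_map[lba]])

cache = DedupWriteCache()
cache.write(0, b"A" * 4096)  # e.g. a block shared by many desktop clones
cache.write(1, b"A" * 4096)  # duplicate: stored only once
cache.write(2, b"B" * 4096)
print(len(cache.store))      # 2 unique blocks held in RAM
```

Because thousands of desktops cloned from the same golden image write largely identical blocks, deduplication of this kind stretches a comparatively small RAM footprint a long way.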

The net result is a coalesced I/O stream that is pre-conditioned for flash. Unneeded writes are eliminated, and larger, more flash-friendly block segments are sent to the flash array. With this type of software solution in place, the VDI project can now focus on the three Ps (price, persistence and performance) knowing that the storage infrastructure can support the implementation.
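The coalescing step itself can be sketched with a toy model, assuming overwrites to the same logical block cancel in RAM and only the surviving blocks are flushed in large segments:

```python
# Toy sketch of write coalescing: buffer small random writes in RAM,
# cancel overwrites to the same address, and flush the survivors in
# large, flash-friendly segments. Illustrative only.

def coalesce(writes, segment_blocks=256):
    latest = {}                 # lba -> data; later writes cancel earlier ones
    for lba, data in writes:
        latest[lba] = data
    # emit sorted, fixed-size groups of block addresses for the flash tier
    lbas = sorted(latest)
    return [lbas[i:i + segment_blocks] for i in range(0, len(lbas), segment_blocks)]

# 3 incoming writes, one of which overwrites lba 7: only 2 blocks reach flash
segments = coalesce([(7, b"old"), (9, b"x"), (7, b"new")])
print(segments)  # [[7, 9]]
```

Real implementations also handle crash consistency and flush timing, but the principle is the same: fewer, larger writes reach the flash tier, which both improves throughput and reduces wear.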


Price

While the user experience is key to user acceptance and adoption of VDI, price is key to project approval, and both experience and return on investment are key to the project expanding beyond an initial pilot. Users may love their virtual desktops, but if the cost to get them “in love” is too high on a per-desktop basis, the project will never get off the ground. Price is largely determined by the storage capacity required, the cost to deliver adequate performance per desktop and the number of desktops per server.

One way to address the capacity concern is to use non-persistent or stateless desktops. In this configuration desktops are allocated from an available pool, providing very efficient capacity utilization but almost no personalization without additional software. Non-persistent desktops are most commonly found in call centers, where personalization is not required. But for knowledge workers in the enterprise, personalization is important, and therefore most VDI implementations try to deliver persistent desktops, as we discuss in the next section.

In the past, creating persistent virtual desktops required that enough storage capacity be allocated for each user, which, when applied across thousands of desktops, could generate a massive capacity demand, driving up cost. Fortunately, most VDI solutions provide a method to address this problem right in their software, at no additional charge, using a combination of linked clones and thin provisioning. But each of these features takes a toll on storage performance, adding to the write performance problems discussed above.
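A back-of-the-envelope comparison shows why linked clones and thin provisioning matter for capacity; the image and delta-disk sizes below are illustrative assumptions:

```python
# Capacity comparison: full clones vs. linked clones with thin
# provisioning. All sizes are illustrative assumptions.

def full_clone_tb(desktops: int, image_gb: float) -> float:
    """Every desktop gets its own full copy of the image."""
    return desktops * image_gb / 1024

def linked_clone_tb(desktops: int, image_gb: float, delta_gb: float) -> float:
    """One shared base image plus a small per-desktop delta disk."""
    return (image_gb + desktops * delta_gb) / 1024

print(full_clone_tb(2000, 40))       # 78.125 TB for 2,000 full 40 GB desktops
print(linked_clone_tb(2000, 40, 2))  # ~3.9 TB for the same desktops as linked clones
```

The roughly 20x capacity reduction is what makes persistent desktops economically viable, at the cost of the extra write I/O the article describes.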

These write problems have led to the rise of flash-based storage systems in VDI environments. The problem is that the flash system cannot achieve its full potential because it now has to handle the extra I/O load of thin provisioning and clones. This is especially an issue in VDI deployments because that extra storage performance could otherwise be put to good use, namely supporting more virtual desktops per host.

Essentially, the processes that perform these functions need some assistance so they don’t bog down the flash array with unnecessary I/O load. That assistance comes in the form of the RAM-leveraging storage optimization software described above. Implementing this type of software along with a flash array allows for higher density and more VDI seats on a given flash array, and allows the flash array to sustain a higher level of reliability because less write I/O is sent to it.


Persistence

Persistent desktops, as discussed above, are virtual desktops that are essentially reserved for each user, while non-persistent or stateless desktops are allocated from a shared pool. The advantage of a persistent desktop is that the user can personalize it by loading the applications they want and adjusting settings to their preference.

For many environments persistent desktops are key to successful VDI adoption by providing the user with a sense of ownership of their virtual environment. But there has been some objection to persistent desktops for two key reasons.

The first is the impact on capacity if these desktops are hard-allocated, since the capacity requirements of thousands of virtual desktops can be staggering (and expensive). As discussed above, this can be overcome by using built-in hypervisor features like thin provisioning and clones. But these features increase write performance demands and often lead to large-scale flash deployments. Again, using software that leverages server RAM to condition this increased write I/O can eliminate the problem and improve overall performance.

The second objection is that users empowered with persistent desktops will want to enable features like search and anti-virus to improve productivity. But these features require additional performance, which we address in the next section.


Performance

A high-performance VDI environment is the key to enabling capabilities that drive the price per desktop lower. A high-performance VDI storage infrastructure allows for maximum capacity utilization and the maximum number of virtual desktops per host. Lowering the amount of flash capacity and the number of physical hosts that need to be purchased improves the return on investment.

Performance is also critical to user acceptance (which is key to adoption). And if it’s delivered correctly, performance can actually move the user from simply accepting their virtual desktop to loving their virtual desktop. The key, though, is that VDI storage designers need to raise their standards for what “acceptable” per-desktop performance looks like.

Most VDI storage infrastructures are sized on the assumption that the environment only has to deliver the equivalent of desktop hard disk performance. But today, most users have some form of solid state storage. Most laptops now run on SSDs, and all tablets, the potential desktops of the future, use solid state storage as well.

As mentioned above, the new goal of VDI storage design should be to deliver MacBook Air performance at a Chromebook price. Achieving this goal, which seems to be in conflict with itself, requires combining a high-performance flash array with software that can optimize the I/O stream before it ever leaves the physical server hosting the virtual desktops. In a recent test, Atlantis, leveraging IBM FlashSystem storage, was able to achieve this level of performance for $190 per desktop, beating the $200 price point of a Chromebook while delivering far more functionality.
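The arithmetic behind a price-per-desktop figure is simply the shared infrastructure cost amortized across the seats it supports; the cost and seat counts below are illustrative assumptions, not the actual test configuration:

```python
# Price-per-desktop sketch: shared infrastructure cost amortized
# across the seats it supports. Figures are illustrative assumptions.

def price_per_desktop(infra_cost: float, desktops: int) -> float:
    return infra_cost / desktops

# e.g. hosts + flash array + optimization software supporting 1,000 seats
cost = price_per_desktop(190_000, 1_000)
print(cost)         # 190.0 per desktop
print(cost < 200)   # True: under the Chromebook price point
```

This is also why density matters so much: every additional desktop a host and flash array can support directly lowers the per-seat price.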


$190 for MacBook Air performance should be very attractive to many organizations. It is important to note that this solution can deliver the ideal user environment while meeting IT’s objectives for virtual desktops: reduced operational expense and increased organizational security. The key is to combine server-based storage software that conditions and optimizes the VDI I/O workload with high-performance flash arrays like IBM FlashSystem. The two working together enable the three Ps of price, performance and persistence.


This Article Sponsored by Atlantis Computing


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

