In a recent webinar, QLogic’s Steve Garceau accurately described the advantages of a Virtual Desktop Infrastructure (VDI) strategy, as well as the challenges. While there is potential for cost savings, the primary benefits that data centers will see when embarking on a VDI initiative are increased operational efficiency and, potentially, increased data security. The primary challenge to these gains, though, is user adoption. If users won’t embrace the VDI initiative, then these gains will obviously be meaningless.
The key to gaining user acceptance of VDI is the performance of the virtual desktop, and storage is the primary inhibitor to delivering a positive user experience. The problem is that most VDI storage performance upgrades are costly and disruptive. This can threaten the return on investment (ROI) calculation that was used to gain approval for the VDI project in the first place.
Flash To The Rescue
To resolve the storage performance challenge that VDI presents, many VDI project managers have resorted to adding some form of solid state disk (SSD) to the infrastructure. While SSD does raise the storage cost of VDI, it does tend to resolve many of the performance challenges. If SSD is implemented correctly, then the number of virtual desktops per host can be increased, potentially offsetting the increased cost of storage. SSD can also improve user adoption, since users’ virtual desktops may actually perform better than their former physical desktops.
The Flash vs. VDI Challenge
For flash not to negatively impact the VDI ROI, it must be implemented as efficiently as possible while still improving performance enough to achieve the increased virtual desktop density mentioned above. For many environments, this is going to mean utilizing flash as cache so that the existing storage footprint can be leveraged. Buying a brand-new storage system, or even adding SSD to an existing name-brand system, can be very costly, to the point that it would negatively impact the VDI ROI.
For good reason, using flash as a cache is an attractive alternative to buying a new storage system or performing an expensive upgrade to a current storage system. With caching, only a small percentage of storage capacity needs to be flash based. The caching software automatically moves the most active data to the flash SSD. This lowers the cost and helps to intelligently identify which data sets are cache worthy.
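The promotion behavior described above can be illustrated with a minimal sketch. This toy model assumes a simple least-recently-used (LRU) policy; real caching software uses far more sophisticated heuristics to decide which data sets are cache worthy, and the class and variable names here are hypothetical.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy read cache: keeps the most recently used blocks in a small,
    fast tier (standing in for flash), evicting the least recently used
    block when capacity is exceeded."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store  # dict: block id -> data (slow disk tier)
        self.capacity = capacity      # number of blocks the flash tier holds
        self.cache = OrderedDict()    # block id -> data (fast flash tier)
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow path: fetch from disk
        self.cache[block_id] = data           # promote active data to flash
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

store = {n: f"block-{n}" for n in range(100)}
cache = FlashReadCache(store, capacity=4)
for n in [1, 2, 3, 1, 1, 2, 9]:  # a skewed, "hot" access pattern
    cache.read(n)
print(cache.hits, cache.misses)  # prints "3 4": hot blocks hit the cache
```

The point of the sketch is the economics: only four blocks of “flash” capacity are needed to absorb most of a skewed workload, which is why a small percentage of flash capacity can front a much larger disk footprint.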
As a result, the industry has sprung forward with a plethora of caching add-on solutions. There are network based cache solutions and server side cache solutions. In fact, there are so many offerings that it can be downright confusing for the overworked IT professional to pinpoint which of these solutions would be the most appropriate for their VDI needs.
Network caches are all-inclusive solutions and have the advantage of working across a variety of storage systems. They also do not require anything to be installed in the host, but all the data must still traverse the storage network. Also, the majority of these solutions are IP or NFS based, whereas the large majority of VDI implementations use Fibre Channel for their storage network connectivity.
Server-side caches have the advantage of significantly reducing the amount of data that has to go across the network. While most server-side caches are read only, write performance will also improve, since the storage network becomes almost exclusively dedicated to writes. This type of caching requires that two components be installed in the server. First, the SSD storage media itself has to be selected: the IT planner has to decide between lower-latency but more expensive PCIe SSD and higher-latency but less expensive drive form-factor SSD. Then caching software has to be selected and installed.
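The read-only, write-through behavior described above can be sketched as follows. This is an illustrative model, not any vendor’s implementation; the class name and the use of a simple counter for network traffic are assumptions made for clarity.

```python
class WriteThroughReadCache:
    """Toy server-side read cache: reads are served from local flash when
    possible; writes always go straight to the shared array (write-through),
    updating the local copy so subsequent reads stay coherent."""

    def __init__(self, array):
        self.array = array    # dict standing in for the SAN array
        self.flash = {}       # local flash tier on the server
        self.network_ops = 0  # operations that cross the storage network

    def read(self, block_id):
        if block_id in self.flash:
            return self.flash[block_id]  # served locally: no SAN traffic
        self.network_ops += 1            # miss: fetch over the network
        data = self.array[block_id]
        self.flash[block_id] = data
        return data

    def write(self, block_id, data):
        self.network_ops += 1            # every write crosses the network
        self.array[block_id] = data      # write-through to the array
        self.flash[block_id] = data      # keep the cached copy coherent

array = {0: "old"}
c = WriteThroughReadCache(array)
c.read(0); c.read(0); c.read(0)  # one miss, then local hits
c.write(0, "new")                # goes to the array immediately
assert c.read(0) == "new"        # reads remain coherent after the write
print(c.network_ops)             # prints "2": one read miss plus one write
```

Note how repeated reads generate no further network traffic, which is exactly why the storage network becomes almost exclusively dedicated to writes.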
Caching software can be installed at different layers within the software stack, including at the hypervisor kernel, as a virtual machine, or within multiple virtual machines. Even after the software selection process is done, there is some concern as to how running the caching software on the server will impact performance. Again, there are pros and cons to each approach; which is most appropriate will vary from data center to data center.
A Cache Alternative
An alternative that may provide the best of both worlds is the storage host bus adapter (HBA) that integrates cache right onto the card. The QLogic FabricCache, discussed in the above webinar, is an excellent example of this technology. This type of implementation means that no caching software needs to be tested and installed, since the caching is integrated into the same HBA driver stack already in use. It also shares the benefit of other server-side caching in that it can eliminate redundant read traffic from spanning the network.
At the same time, a caching HBA provides value similar to that of network caches. Since they operate at the HBA level, any SAN-attached storage device that they connect to can be accelerated as part of the cache. QLogic’s implementation even provides a cache sharing function: clustered servers with FabricCache cards can share each other’s cache contents, thus accelerating access to a larger data pool and further reducing the I/O demands placed upon the network storage device.
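The cache sharing idea can be modeled in a few lines. This is a conceptual sketch of peer cache lookup in a cluster, not a description of FabricCache’s actual protocol; all names here are hypothetical.

```python
class CacheNode:
    """Toy model of clustered cache sharing: a node serves a read from its
    own cache first, then from a peer's cache, and only as a last resort
    from the shared storage array."""

    def __init__(self, array):
        self.array = array  # dict standing in for the shared SAN array
        self.cache = {}     # this node's local flash cache
        self.peers = []     # other nodes in the cluster

    def read(self, block_id, stats):
        if block_id in self.cache:
            stats["local"] += 1
            return self.cache[block_id]
        for peer in self.peers:            # a peer hit avoids the array
            if block_id in peer.cache:
                stats["peer"] += 1
                data = peer.cache[block_id]
                self.cache[block_id] = data
                return data
        stats["array"] += 1                # last resort: hit the array
        data = self.array[block_id]
        self.cache[block_id] = data
        return data

array = {7: "data-7"}
a, b = CacheNode(array), CacheNode(array)
a.peers, b.peers = [b], [a]
stats = {"local": 0, "peer": 0, "array": 0}
a.read(7, stats)  # misses everywhere: one array access
b.read(7, stats)  # found in node a's cache: the array is not touched
print(stats)      # prints "{'local': 0, 'peer': 1, 'array': 1}"
```

The second node’s read is satisfied from its peer rather than the array, which is the mechanism by which cache sharing reduces the I/O demands placed on the network storage device.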
The storage infrastructure that supports the VDI environment plays a critical role in user adoption of virtualized desktops. Thanks to the capabilities of flash storage, that role can be maximized. Caching allows flash to be used efficiently, but many implementations are disruptive. The key for data centers is how to implement caching without breaking the bank or causing too much disruption. Flash-enhanced HBAs, like those available from QLogic, may provide the ideal way to accomplish these seemingly at-odds goals.
QLogic is a client of Storage Switzerland