Server Side VMware Caching Gets Smarter

Increasing virtual machine (VM) density is critical for organizations looking to continue reaping the ROI benefits of virtualization; the more VMs per server, the more cost effective the virtualization project becomes. But there is one primary roadblock to increased VM density: storage I/O. As the number of VMs increases, so do the requests for data, meaning the storage network and the connected storage system can quickly become the bottleneck. Help is on the way, however, in the form of server-side caching, and companies like Proximal Data, with their 1.1 release, are now making the technology smarter.

Server Side Caching Step 1

Server-side caching quickly gained popularity in VMware environments where IT managers were looking to increase VM density. By caching active data locally on the host, the I/O impact on the storage network and storage system is significantly reduced. But the first generation of these solutions was typically guest-OS based, meaning that an agent had to be installed in each VM and local solid state disk (SSD) capacity had to be manually partitioned between those instances.

There was also the issue of VM migration. Since the guest OS has no real understanding that a VM migration is occurring, it has no way to bring cached data with it, or to know to flush or evict the cache when a move is triggered.

Server Side Caching Step 2

The next step in making server-side caching smarter was to integrate it at the hypervisor level. This allowed a single driver to be installed and controlled from within the VMware console. This is where Proximal Data originally burst onto the scene last year with its AutoCache 1.0 software solution.

AutoCache is an I/O caching software solution that plugs into ESXi in seconds. It works by inspecting all of the data that passes through the ESXi kernel and copying the most active blocks to cache for more rapid future retrieval. The solution is hardware neutral and can leverage either a PCIe flash memory card or a drive form-factor SSD to store that active data. Once a block is in cache, all future retrievals come from that local flash, which provides faster response time than hard disk storage and saves a trip across the storage network.
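The general idea of promoting frequently read blocks into local flash can be sketched as follows. This is a deliberately simplified, hypothetical model; the class name, promotion threshold and eviction behavior are illustrative assumptions, not Proximal Data's actual algorithm.

```python
from collections import defaultdict

class HotBlockReadCache:
    """Conceptual sketch of a hypervisor-level read cache that promotes
    frequently read blocks into flash. All names and thresholds here are
    illustrative assumptions, not AutoCache's real implementation."""

    def __init__(self, backend, promote_after=2, capacity=4):
        self.backend = backend          # dict: block id -> data (stands in for the array)
        self.cache = {}                 # flash-resident copies of hot blocks
        self.reads = defaultdict(int)   # per-block read counts
        self.promote_after = promote_after
        self.capacity = capacity

    def read(self, block):
        if block in self.cache:            # flash hit: no trip across the storage network
            return self.cache[block], "flash"
        data = self.backend[block]         # miss: fetch from shared storage
        self.reads[block] += 1
        if self.reads[block] >= self.promote_after and len(self.cache) < self.capacity:
            self.cache[block] = data       # block has now "qualified" into the cache
        return data, "disk"
```

With these toy settings, the first two reads of a block come from disk while the block qualifies; subsequent reads are served from flash.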

AutoCache is a read-cache technology that leverages either write-through caching or write-around caching. In write-through mode, all data is written to both the local flash and the hard disk, but acknowledgment of the write is not sent to the application until it has been completed on the hard disk system. This allows the cache to be pre-populated with what is likely the most active data (data that was just written) while providing the comfort of knowing that the data is secured on hard disk. There is also a write-around mode for data that won't ever be read again; temporary and transient data are good examples.
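The difference between the two write policies can be shown in a few lines. This is a minimal sketch of the general caching concepts, not AutoCache's code; the class and mode names are assumed for illustration.

```python
class WritePolicyCache:
    """Illustrative write handling for a read cache (hypothetical sketch,
    not the AutoCache implementation).
    write_through: copy the write into flash AND to disk; acknowledge
    only after the disk write completes, so the cache is pre-populated
    with just-written (likely hot) data while durability is preserved.
    write_around: bypass flash entirely for data that will never be
    re-read, so it doesn't pollute the cache."""

    def __init__(self):
        self.flash = {}   # local SSD / PCIe flash
        self.disk = {}    # shared hard disk storage

    def write(self, block, data, mode="write_through"):
        if mode == "write_through":
            self.flash[block] = data   # pre-populate the read cache
        self.disk[block] = data        # durable copy always lands on disk
        return "ack"                   # sent only after the disk write completes
```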

What made AutoCache 1.0 so appealing was its simplicity. The software installed quickly, once per host instead of dozens of times per host (once per VM), and started to work immediately. Its tight integration included a vCenter management plug-in (seen below) that further simplified operations and reporting.

[Screenshot: AutoCache vCenter management plug-in]

For a small investment in flash storage, this combination of capabilities enabled administrators to increase VM density 2-3X. It also allowed them to keep the VMware features they liked, such as vMotion and the Distributed Resource Scheduler (DRS), because AutoCache was aware of VM migrations and could evict the cache accordingly.
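The migration-aware eviction described above amounts to keeping a per-VM map of cached blocks and dropping a VM's entries when the hypervisor signals that it is leaving the host. A minimal sketch, with assumed names (the article does not describe the actual event interface):

```python
class MigrationAwareCache:
    """Sketch of per-VM cache bookkeeping that supports eviction on
    migration. Class and method names are hypothetical; the real
    product hooks VMware events we don't have visibility into."""

    def __init__(self):
        self.per_vm = {}   # vm name -> {block id: data}

    def cache_block(self, vm, block, data):
        self.per_vm.setdefault(vm, {})[block] = data

    def on_vmotion_out(self, vm):
        # Evict everything the departing VM cached on this host,
        # keeping the cache coherent and freeing space for resident VMs.
        evicted = self.per_vm.pop(vm, {})
        return len(evicted)
```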

Server Side Caching Step 3

The next step in making caching software smarter is to make sure it can work with any storage device and any protocol. Many caching solutions are limited to supporting a single protocol, which can be a problem given the variety of storage interconnect options available to customers. Fibre Channel SANs are still the dominant platform, but iSCSI and NAS via NFS are both gaining in popularity. In fact, many data centers use a mixture of these protocols.

Ideally, you don't want to use a different server-side caching solution for each protocol in the infrastructure. Doing so would add to an already overwhelming management burden and likely increase costs. Proximal Data's latest 1.1 release adds support for all the major storage protocols. This allows the VMware administrator to adopt one caching solution for their environment regardless of which protocols are in use now and, more importantly, which they might use in the future.

Server Side Caching Step 4

The next step in increasing the intelligence of server-side caching is for the cache to load more quickly and to better manage VM migrations. Fast cache loading returns an ROI sooner by populating the cache with data shortly after initial installation. It also returns the server to a high-performance state more quickly after a reboot. Once the cache has been populated, it fine-tunes itself, making the available cache storage area more efficient.

Understanding that a migration is occurring and evicting the cache keeps data safe and frees up cache space for resident VMs. But more can be done. The biggest issue with cache eviction is its impact on performance. For example, a VM that was counting on accelerated SSD performance for its reads now has to suffer through hard drive performance until its data can be re-qualified into the cache on its new host. For a VM that was built specifically for SSD acceleration, suddenly losing it can cause a big problem.

There are a variety of techniques being developed to get around this problem, but most add a layer of complexity and, in many cases, involve installing a private network for the cache and mirroring expensive SSDs. Proximal Data, in its 1.1 release of AutoCache, has taken an approach that we think users will find much simpler to implement.

In 1.1, AutoCache pre-warms cached data from the source host onto the destination host once a VMware vMotion event is detected. This gives vMotioned VMs a head start, making critical data available sooner on the target host after a migration. While this means the data still has to be re-read, it doesn't have to be re-qualified. Compared to the cache eviction scenario described above, this is a marked improvement.
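Conceptually, pre-warming means the destination host learns which blocks were hot on the source and re-reads them from shared storage into its own flash, skipping the re-qualification wait. A hedged sketch (the article does not detail the actual protocol, so the function and its signature are assumptions):

```python
def prewarm_on_vmotion(source_hot_blocks, shared_storage):
    """Illustrative pre-warm step: given the source host's hot-block
    list, pull those blocks from shared storage into the destination
    host's flash. The data is read again (one trip per block), but it
    becomes cache-resident immediately instead of having to re-qualify
    through repeated cache misses. Hypothetical sketch only."""
    return {b: shared_storage[b] for b in source_hot_blocks if b in shared_storage}
```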

Storage Swiss Take

When server-side caching first arrived on the market, it seemed like an ideal solution for increasing VM density, and to a large extent it was. But broadening its appeal even further requires that the technology continue to address the problems we have cited in server-side cache design. Two of those problems are the lack of broad protocol support and the performance impact on a recently moved VM. By expanding the use of its I/O Intelligence™ technology in the 1.1 release of AutoCache, Proximal Data squarely addresses those problems, creating an ideal server-side solution to the VM density challenge.

Proximal Data is a client of Storage Switzerland

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

Posted in Product Analysis
