Automated Caching for the Virtualized Data Center

Many industry observers estimate that the server infrastructure in a typical data center environment is approximately 50% virtualized. As virtual machine (VM) density increases, conventional storage platforms are wilting under the pressure of managing highly randomized storage IO patterns. Indeed, the “storage blender” effect, in which unpredictable, high-volume read and write IO activity emanates from many VMs at once, threatens application quality of service (QoS) levels. As a result, server administrators are increasingly turning to server-side caching and SSD solutions to meet performance objectives.

Solid state disk (SSD) and flash memory based storage technologies offer business application owners a safety valve for responding to the storage IO pressure experienced in highly dense VM environments. Server PCIe-based Flash cards and drive form factor SSDs provide a targeted approach to delivering very high performance storage resources directly where the application resides: within the hypervisor host itself. Furthermore, since these devices are implemented server-side, they also alleviate storage network congestion.

PCIe Flash cards plug directly into the host’s motherboard and offer the lowest latency and highest IO performance potential of any storage peripheral. While these cards carry a higher cost per GB, they offer the best cost per storage IOPS and provide a total storage density comparable to SSDs and conventional hard disk drives.

For virtualized applications with less demanding performance requirements, drive form factor SSDs, which install directly into a host’s drive bays, can be a quicker and more cost-effective way to sidestep the latency issues often encountered between storage arrays and the networks servicing them. While they do not match the performance of PCIe Flash cards, SATA-based SSDs are orders of magnitude faster than spinning disk, and a server can typically accommodate more SSD drives than Flash memory cards. These SSDs are also now available as a standard option with a new server purchase.

Mixing the Cache

Deploying a mix of PCIe Flash cards and SATA-based SSDs in virtualized server environments to service different IO workloads can be an effective means of distributing cache storage resources. For example, operating system images can be placed on SATA SSDs to enable rapid VM boot times, while virtual memory paging can be directed to PCIe Flash, since paging involves direct memory-to-memory transfers.
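
As a minimal sketch, a placement policy along these lines could be expressed as a simple mapping from workload type to cache device. The device paths and workload categories below are illustrative assumptions, not part of any vendor product:

```python
# Illustrative placement policy: route each workload type to the cache
# device whose performance profile fits it best. Paths are hypothetical.

PLACEMENT_POLICY = {
    "vm_swap":    "/dev/pcie_flash0",  # paging: lowest latency wins
    "db_logs":    "/dev/pcie_flash0",  # small, hot, write-intensive data
    "os_images":  "/dev/sata_ssd0",    # read-mostly; fast VM boot at lower cost
    "user_files": "/dev/sata_ssd1",    # bulk of warm data
}

def cache_device_for(workload: str) -> str:
    """Return the cache device assigned to a workload type."""
    return PLACEMENT_POLICY.get(workload, "/dev/sata_ssd1")  # default tier

for wl in ("vm_swap", "os_images", "unknown"):
    print(f"{wl} -> {cache_device_for(wl)}")
```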

While the tactical placement of fast storage resources in the hypervisor server is a step in the right direction toward alleviating VM IO constraints, it is not the final solution. Statically placing hot files into a cache resource (whether PCIe Flash or SSD) is often like trying to hit a moving target. Furthermore, busy storage and application administrators may not always be available to analyze IO trends and make “on the spot” decisions about which hot database tables or active application files deserve to be loaded into cache.

Automated Data Placement

For these reasons, many virtual server and storage administrators are opting to implement cache management software to help ensure application QoS and make the most efficient use of their investments in Flash and SSD. Caching software is typically installed on a physical or hypervisor host, where it analyzes the IO traffic going to disk to determine which data should be loaded into cache storage resources. The benefit of caching software is automated data placement, which keeps performance consistent even as IO patterns change over time.
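
The admission logic at the heart of such software can be sketched, in highly simplified form, as a frequency counter over recently accessed blocks. The threshold and reset window below are arbitrary illustrative values, not Intel CAS parameters:

```python
from collections import Counter

# Simplified sketch of cache-admission logic: count accesses per block
# and admit blocks that cross a heat threshold. Real caching software
# also weighs recency, IO size, and the read/write mix.

HEAT_THRESHOLD = 3  # illustrative: admit after 3 accesses in a window

class HotBlockTracker:
    def __init__(self):
        self.access_counts = Counter()
        self.cached = set()

    def record_access(self, block_id: int) -> None:
        self.access_counts[block_id] += 1
        if (block_id not in self.cached
                and self.access_counts[block_id] >= HEAT_THRESHOLD):
            self.cached.add(block_id)  # copy the block to the cache device

    def new_window(self) -> None:
        """Reset counts periodically so stale blocks can cool off."""
        self.access_counts.clear()

tracker = HotBlockTracker()
for block in [7, 7, 42, 7, 42, 42]:
    tracker.record_access(block)
print(tracker.cached)  # {7, 42}: both blocks crossed the threshold
```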

Bear in mind, however, that upon initial installation of caching software, it can take several hours to several days for the cache to “warm up” with the appropriate application data sets. This can present a challenge in virtual environments where critical applications are IO bound, waiting for performance to return to acceptable levels.
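
As a rough illustration of why warm-up matters, the toy simulation below shows the hit ratio climbing across intervals as a skewed workload gradually pulls its hot set into an initially empty cache. All numbers are arbitrary assumptions chosen for demonstration:

```python
import random

# Toy warm-up simulation: 90% of accesses land in a 1,000-block hot set
# within a 10,000-block volume. The cache admits every block on first
# miss and never evicts, so the hit ratio rises as the cache warms.

random.seed(0)
HOT_BLOCKS, TOTAL_BLOCKS, cache = 1_000, 10_000, set()

for interval in range(5):
    hits = total = 0
    for _ in range(2_000):
        block = (random.randrange(HOT_BLOCKS) if random.random() < 0.9
                 else random.randrange(TOTAL_BLOCKS))
        total += 1
        if block in cache:
            hits += 1
        else:
            cache.add(block)  # admit on miss (no eviction in this toy)
    print(f"interval {interval}: hit ratio {hits / total:.1%}")
```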

Moreover, in environments that leverage vMotion to dynamically move VM workloads between hypervisor hosts, the process of de-staging cache to disk (cooling the cache) and re-warming the cache on the new host could degenerate into a vicious cycle of wildly inconsistent application performance.

Getting Cache Hot

Caching technologies like Intel’s Cache Acceleration Software (CAS) are designed specifically to address the dynamic storage IO and changing infrastructure requirements of virtualized server computing environments. The Intel CAS software installs on a physical or hypervisor host and operates completely transparently to end users and the underlying application systems.

While the CAS technology automates the movement of hot data sets into the appropriate cache storage resource, it also enables administrators to “pin” data into cache at any point. This is a key capability for overcoming the cache warm-up issues described above. Rather than waiting for CAS to analyze and discover which data sets to load into cache, application and storage administrators can place active database tables and files into cache for an immediate performance boost. It also ensures that a mission critical data set never suffers a cache miss.
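
A minimal sketch of how pinning might layer on top of an eviction policy follows. This is an illustrative model, not Intel CAS’s actual interface; the class, method names and capacity are assumptions:

```python
from collections import OrderedDict

# Illustrative model: an LRU cache in which pinned entries are exempt
# from eviction, so a mission-critical data set never takes a cache miss.

class PinnableLRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # block_id -> data, in LRU order
        self.pinned = set()

    def pin(self, block_id, data) -> None:
        """Load a block into cache immediately and protect it from eviction."""
        self.entries[block_id] = data
        self.pinned.add(block_id)

    def put(self, block_id, data) -> None:
        self.entries[block_id] = data
        self.entries.move_to_end(block_id)  # mark as most recently used
        while len(self.entries) > self.capacity:
            # Evict the least recently used *unpinned* entry.
            for victim in self.entries:
                if victim not in self.pinned:
                    del self.entries[victim]
                    break
            else:
                break  # everything left is pinned; nothing can be evicted

    def get(self, block_id):
        if block_id in self.entries:
            self.entries.move_to_end(block_id)
            return self.entries[block_id]
        return None  # cache miss; fetch from backing storage
```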

Keeping Cache Hot

Perhaps more importantly, through its integration with VMware’s vMotion technology, solutions like Intel’s CAS are able to maintain a hot cache even when VMs are migrated across hypervisor hosts. This is a key feature for truly enabling performance automation within virtualized data centers and cloud infrastructure environments that are subject to constant change. Additionally, these capabilities enable business and application owners to get a return on their cache storage investments and respond effectively to changing conditions in the data center.

Intel’s CAS technology can be paired with Intel’s PCIe Flash card or SATA-based SSD offerings to create an integrated caching hardware and software solution that addresses the IO constraints of the virtualized data center. The Intel® SSD 910 Series PCIe Flash card, for example, can achieve up to 180k random read IOPS and up to 75k random write IOPS on an 800GB module. This resource is ideally suited for enterprise applications that require the highest performance and lowest latency possible, including virtualized applications and hot database tables and logs.

Similarly, the Intel® SSD DC S3700 Series SATA drive is available in 100, 200, 400 and 800GB capacities and can deliver up to 75k random read and 36k random write IOPS. The lion’s share of hot, active user data and files could be placed on this SSD to alleviate IO bottlenecks. Both of these drives are part of Intel’s data center product family and are rated to sustain up to 10 full drive writes per day over a 5 year time frame, making them well suited for caching and other write-intensive environments.
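
To put that endurance rating in perspective, a quick back-of-the-envelope calculation for the largest capacity point (assuming the rating applies at full capacity) works out as follows:

```python
# Endurance estimate for a drive rated at 10 full drive writes per day
# (DWPD) over a 5-year period, at the 800GB capacity point.

capacity_gb, dwpd, years = 800, 10, 5
total_writes_gb = capacity_gb * dwpd * 365 * years
print(f"Rated write endurance: {total_writes_gb / 1e6:.1f} PB")
# -> Rated write endurance: 14.6 PB
```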

Multi-level Caching

To further buttress performance, Intel® CAS can integrate multi-level caching that utilizes server-based Dynamic Random Access Memory (DRAM) to provide bus transfer speeds for the most demanding data sets. Essentially, as data is loaded into a cache resource, like the Intel® SSD 910 Series PCIe card or the Intel® SSD DC S3700 Series drive, CAS analyzes how frequently data sets are accessed and promotes the most active ones “up the food chain” to the fastest available cache resource.

In turn, as data becomes “cooler”, it is de-staged to progressively less performance-oriented storage resources. This process helps negotiate the best repository for data based on its relative importance at any point in time, improving storage efficiency and helping ensure that cache resources are neither under- nor over-provisioned.
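
A highly simplified sketch of such a promote/demote loop appears below. The tier names, thresholds and periodic rebalance are illustrative assumptions, not a description of Intel CAS internals:

```python
# Illustrative multi-level tiering loop: blocks climb to faster tiers as
# access frequency rises and are de-staged as they cool. All thresholds
# and tier names are arbitrary assumptions.

TIERS = ["dram", "pcie_flash", "sata_ssd", "backing_disk"]  # fast -> slow
PROMOTE_AT = 10  # accesses per interval needed to move up one tier
DEMOTE_AT = 2    # below this, data moves down one tier

class TieredCache:
    def __init__(self):
        self.tier_of = {}  # block_id -> index into TIERS
        self.heat = {}     # block_id -> access count this interval

    def record_access(self, block_id) -> None:
        self.tier_of.setdefault(block_id, len(TIERS) - 1)  # start on disk
        self.heat[block_id] = self.heat.get(block_id, 0) + 1

    def rebalance(self) -> None:
        """Run periodically: promote hot blocks, de-stage cool ones."""
        for block_id, tier in self.tier_of.items():
            count = self.heat.get(block_id, 0)
            if count >= PROMOTE_AT and tier > 0:
                self.tier_of[block_id] = tier - 1  # up the food chain
            elif count < DEMOTE_AT and tier < len(TIERS) - 1:
                self.tier_of[block_id] = tier + 1  # de-stage downward
        self.heat.clear()  # begin a fresh measurement interval
```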

Intel’s CAS technology can also take advantage of cache memory resources within a NAS or SAN storage array to further deepen the pool of available solid state storage across the data center. As a natural consequence of establishing an automated, multi-tiered caching environment, existing NAS and/or SAN storage arrays are freed from handling heavy IO traffic, extending the life of these assets as capacity-oriented, rather than performance-oriented, storage resources.

Storage Swiss Take

Virtualized server infrastructure is placing heavy IO demands on legacy storage platforms. To head off business application performance issues, many data center managers are trying to “band-aid” their environments by tactically installing high speed caching devices and SSD drives into their hypervisor servers and existing storage arrays. But data centers can no longer just throw hardware at performance problems and hope that the issues will go away.

Instead, a layer of software intelligence is needed to manage the movement of data across multiple tiers of high speed system cache memory, solid state and conventional disk storage to effectively meet the ever changing IO requirements of the modern data center. Intel’s CAS technology, combined with its Flash and SSD product offerings, is an optimal way to meet these challenges head on while enabling infrastructure managers to extend the life of their existing storage assets and maintain consistent virtual machine application QoS.

Intel is a client of Storage Switzerland

A 22-year IT veteran, Colm has worked in a variety of capacities, ranging from technical support of critical OLTP environments to consultative sales and marketing for system integrators and manufacturers. His focus on enterprise storage, backup and disaster recovery solutions spans mainframe and distributed computing environments across a wide range of industries.
