A cache in a manufacturing environment is an intermediate store of components or partially assembled products, often referred to as “in-process inventory”, that serves to make the overall production process more efficient. In a computer system, caches, also called “buffers”, similarly decouple the various components in the data path from one another, maximizing the throughput of the system as a whole. Like a factory’s in-process inventory, buffers hold data and smooth the transitions between components (steps in the ‘data production line’) that run at different speeds.
Storage systems have caches too, most notably input or write caches, which hold data coming in from external systems such as a host server, decoupling the storage from the compute engine. In this way a storage cache can mask the speed of the storage system, which typically operates much more slowly than the memory used by the application programs driving its data interactions. Storage systems also have caches on the read path, which improve efficiency as well, but this time by eliminating redundant disk accesses when an application requests the same data multiple times.
Server-side Storage Caching
A storage cache can be implemented within the storage system or in the server itself. Each method has its benefits, but server-side storage caching offers the advantage of locating the cache closest to the CPU that’s doing the work, bringing low-latency storage access right where it’s needed. Putting cache capacity into the server can also be a more targeted approach and may provide a simpler performance solution, since it eliminates the complexity of the network and storage system.
Flash Storage Caching
Cache buffers that run in system memory and support the compute process are usually DRAM, but NAND flash can be a more cost-effective and more reliable alternative for storage caching. In general, flash’s lower per-GB cost makes larger storage caches more feasible than with DRAM, and larger caches mean higher cache hit rates, even for very large workloads of multiple terabytes. Flash, being non-volatile, doesn’t need to be ‘repopulated’ from disk after a server failure, enabling faster recovery. While there are some non-volatile DRAM products, they’re even more expensive and/or physically larger than standard DRAM.
Write-around Caching
Write-around caching essentially speeds up the read process by servicing data requests from the cache (a ‘hit’), eliminating the latency that would be incurred in retrieving data from the back-end disk array. Keeping the data most likely to be accessed in the cache area requires a continuous evaluation to determine which data is the ‘hottest’, and then a process to move that data into the cache area and evict ‘cold’ data. This type of ‘read-only’ cache can be relatively inefficient, as data must first be written to disk and then pass through the evaluation process before it’s promoted to the cache.
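To make the pattern concrete, here is a minimal sketch of a write-around cache in Python, assuming a simple read counter as the ‘heat’ metric (the class name, the promotion threshold and the dict-based ‘disk’ are illustrative assumptions, not any vendor’s implementation):

```python
class WriteAroundCache:
    """Read-only cache: writes bypass the cache entirely; data is promoted
    into the cache only after repeated reads prove that it's 'hot'."""

    def __init__(self, disk, capacity, promote_threshold=3):
        self.disk = disk                        # back-end store (dict-like)
        self.capacity = capacity                # max cached blocks
        self.promote_threshold = promote_threshold
        self.cache = {}                         # block -> data
        self.reads = {}                         # block -> read count

    def write(self, block, data):
        self.disk[block] = data                 # write-around: straight to disk
        self.cache.pop(block, None)             # invalidate any stale cached copy

    def read(self, block):
        if block in self.cache:                 # hit: no disk latency
            return self.cache[block]
        data = self.disk[block]                 # miss: fetch from disk
        self.reads[block] = self.reads.get(block, 0) + 1
        if self.reads[block] >= self.promote_threshold:
            if len(self.cache) >= self.capacity:
                # evict the 'coldest' cached block to make room
                coldest = min(self.cache, key=lambda b: self.reads.get(b, 0))
                del self.cache[coldest]
            self.cache[block] = data            # promote 'hot' data
        return data
```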
Write-through Caching
In write-through mode the caching software writes data to the flash cache and simultaneously writes that data “through” to the storage behind it (disk drives), at which time the write operation is acknowledged back to the application. Although called write-through, it’s essentially a type of read caching, since it pre-populates the cache with the most likely ‘next read’ candidates, improving read performance by eliminating the data warming period that’s required in the write-around method. Write-through also ensures the storage area is 100% in sync with the cache and its data is always available to other servers sharing that primary disk storage.
In write-through caching, each write operation is acknowledged only after data is written to disk, so there’s no increased risk of a write transaction being lost if the cache fails or the server suddenly loses power. However, write-through caching does not directly increase the speed at which data is written. Moving read commands off storage systems and RAID controllers and onto the cache has been known to “make room” for write operations, providing some indirect improvement in an application’s overall write performance. But write operations are still fundamentally performed at the speed of the disk system, which may result in little net benefit for write-intensive applications.
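A comparable sketch of the write-through pattern, under the same illustrative assumptions (dict-based backing store, naive eviction), shows why the write path gains nothing: the acknowledgement still waits on the disk write.

```python
class WriteThroughCache:
    """Writes land in the cache and are pushed 'through' to disk in the same
    operation; the application is acknowledged only after the disk write, so
    cache and disk never diverge and reads are pre-warmed by writes."""

    def __init__(self, disk, capacity):
        self.disk = disk                        # back-end store (dict-like)
        self.capacity = capacity
        self.cache = {}

    def _make_room(self, block):
        if block not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # naive eviction

    def write(self, block, data):
        self._make_room(block)
        self.cache[block] = data                # pre-populate 'next read' data
        self.disk[block] = data                 # still runs at disk speed
        return True                             # ack only after the disk write

    def read(self, block):
        if block in self.cache:
            return self.cache[block]            # hit: no warming period needed
        data = self.disk[block]
        self._make_room(block)
        self.cache[block] = data
        return data
```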
Write-back Caching
Write caching provides a buffer to decouple the slower back-end disk storage area from the front-end applications. This means write caching can have a very significant impact on storage performance, depending on how write-intensive the workload is.
“Write-back” means the caching software writes data first to the cache and then acknowledges the write to the application before anything has been written to the disk storage area. Periodically, those writes are ‘written back’ to the disk area after important optimization processes have been conducted, such as write cancellation and write coalescing. This allows the application to proceed in the shortest possible time, decoupling the slower disk storage from the compute process. However, write caching can present the risk of data loss if the cache fails before its data can be written to the back-end storage area.
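A minimal write-back sketch, again with an illustrative dict-based backing store, makes the trade-off visible: the acknowledgement no longer waits on the disk, but the dirty set is lost if the cache fails before a flush.

```python
class WriteBackCache:
    """Writes are acknowledged as soon as they land in the (flash) cache;
    dirty blocks are flushed to disk later, after optimization."""

    def __init__(self, disk):
        self.disk = disk                        # back-end store (dict-like)
        self.cache = {}                         # block -> data
        self.dirty = set()                      # blocks newer in cache than on disk

    def write(self, block, data):
        # Re-writing a dirty block 'cancels' the earlier, never-flushed write.
        self.cache[block] = data
        self.dirty.add(block)
        return True                             # ack immediately; disk untouched

    def read(self, block):
        if block in self.cache:
            return self.cache[block]
        data = self.disk[block]
        self.cache[block] = data
        return data

    def flush(self):
        # Periodic write-back: only the latest version of each dirty block
        # reaches disk; sorting makes the back-end I/O more sequential.
        for block in sorted(self.dirty):
            self.disk[block] = self.cache[block]
        self.dirty.clear()
```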
Write-back caching is typically used to support write-intensive workloads that benefit the most from faster write operations, such as business analytics and the real-time transactions that drive revenue. These use cases justify the extra hardware and software (and the cost) required to make sure the cache won’t fail before data is flushed to disk. This includes the use of mirrored caches, super-caps and battery-backed circuits to keep data ‘alive’ in the event of a power loss.
Improving Storage Caching Designs
Ideally, flash caching software should be written to support NAND flash technology, not adapted from existing DRAM designs. NAND flash has some intrinsic characteristics that are much different than DRAM and need to be supported. For example, flash can only accept a finite number of writes (called “program/erase” or “P/E” cycles) before taxing the error correction capabilities of the device and eventually becoming unreliable.
While the total number of P/E cycles that flash devices can handle is significant, improving flash endurance is still a primary objective of enterprise flash storage devices, and more specifically, the controllers on these devices. Caching software, since it controls write activity to the flash area, can greatly influence how quickly those cycles are consumed.
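A rough, back-of-the-envelope illustration of why this matters (all the device numbers below are hypothetical, not any product’s specification):

```python
# Hypothetical device: 400 GB of NAND rated for 3,000 P/E cycles, with an
# assumed write amplification of 2x inside the controller.
capacity_gb = 400
pe_cycles = 3_000
write_amplification = 2.0

# Total host data the device can absorb before wear-out, in TB written.
tbw = capacity_gb * pe_cycles / write_amplification / 1_000
print(f"Endurance: ~{tbw:,.0f} TB written")       # ~600 TB

# At a sustained cache write rate of 100 GB/day the device lasts years;
# halve the write traffic (e.g., via coalescing) and the life doubles.
daily_writes_gb = 100
years = (tbw * 1_000) / daily_writes_gb / 365
print(f"Estimated life: ~{years:.0f} years")      # ~16 years
```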
Coalesced Writes
One method that companies like SanDisk are offering, with their FlashSoft caching software, is to coalesce I/O transactions by queuing write commands before data is written into the cache buffer. Since most changes to the same data occur within a relatively short time, this allows the software to consolidate high-turnover activity (write cancellation) and generate fewer, more sequential write operations. In write-back mode, coalescing can reduce write operations on the back end by up to 20x, optimizing the process of flushing data from the cache buffer to the storage system. It also improves back-end write performance, since only the latest version of the changed data is actually flushed to disk, removing some of the randomness from the I/O pattern. Finally, in both write-through and write-back modes, coalescing writes should result in fewer P/E cycles to the flash area, improving flash endurance.
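A minimal sketch of this queue-and-cancel idea, assuming a plain dict as the write queue (an illustration of the general technique, not FlashSoft’s actual implementation):

```python
class CoalescingWriteQueue:
    """Holds write commands before they reach flash. Overlapping writes to
    the same block are cancelled (only the latest survives), and the flush
    is issued in block order, turning random updates into fewer, more
    sequential physical writes."""

    def __init__(self):
        self.pending = {}                       # block -> latest queued data

    def write(self, block, data):
        # Write cancellation: a newer write simply replaces the queued one,
        # so the older version never consumes a P/E cycle.
        self.pending[block] = data

    def flush(self, backend):
        # Write coalescing: drain the queue in sorted block order so the
        # back end sees one sequential pass instead of scattered updates.
        for block in sorted(self.pending):
            backend[block] = self.pending[block]
        issued = len(self.pending)
        self.pending.clear()
        return issued                           # physical writes actually issued


# 100 logical writes to the same block collapse into a single physical write.
queue = CoalescingWriteQueue()
for i in range(100):
    queue.write(block=7, data=f"version {i}")
print(queue.flush(backend={}))                  # -> 1
```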
Caching in the Hypervisor
For virtualization environments, caching designed to run at the host level, like SanDisk’s FlashSoft, can provide advantages over technologies that rely on an agent running in the guest OS. Host-level caching allows the hypervisor to dynamically allocate cache space to VMs as needed, improving flash utilization. There should also be less administrative overhead compared to managing caching at the VM level, and hypervisor-level caching doesn’t create the potential security concerns associated with agents running in each VM.
Conclusion
Storage caching is a technology solution to the fundamental shortcomings of disk-based storage systems and storage networks. Using NAND flash and locating this cache on the server can result in significant performance gains to critical software applications. However, choosing the correct type of storage cache, especially with respect to the way writes are handled, is important to its effectiveness. This is the case with both virtualized server and desktop environments that have significant write workloads.
From a design perspective, caching software that uses NAND flash should be written for its unique characteristics and should have the intelligence to reduce write activity to the flash substrate. In addition, for virtual environments, server-side flash caching software should be able to run at the hypervisor layer.
SanDisk is a client of Storage Switzerland
Related articles
- Application Defined Cache Acceleration (storageswiss.com)
- How to Reduce Server-side Cache Risk (storageswiss.com)
- What To Look For In All-Flash Deduplication (storageswiss.com)