The ROI of Server-Side Caching

Implementing server-side caching with the right solid state disk (SSD) can be like conducting a ‘surgical strike’ on storage performance problems. Installing this combination of hardware and software can eliminate the storage roadblock to increased transactions per second, while not requiring changes to the rest of the storage infrastructure. But this performance precision comes at a price.

Can installing server-side caching in anything more than a couple of servers be cost effective? Conventional wisdom holds that shared storage should generate a better ROI as the number of servers included in a performance improvement initiative increases. Is conventional wisdom wrong?

The ROI of SSD Choice

The first step in establishing a solid return on investment for server-side caching is to use the right type and the right amount of flash for the task at hand. By coupling caching capabilities to specific devices, some vendors force users into PCIe-based SSDs for the cache in their servers. While this form of SSD is the highest performing, thanks to its direct access to the CPU, it is also the most expensive.

Because storage I/O latency is only one factor that can constrain application performance, many, probably most, applications can’t take full advantage of PCIe-SSD performance, so a less expensive drive form-factor SSD may be all that the application needs. This is especially true considering drive form-factor SSDs are narrowing the performance gap with PCIe SSDs as they move to 6Gbps and 12Gbps SAS connectivity.

Also, when appropriate, the caching software should allow multiple SSD types to be used within the same server. For example, a small PCIe SSD may be the right choice when maximum performance is needed, while a high-capacity drive form-factor SSD is used for everything else. Terabytes of SSD cache per server are now very affordable, meaning an application’s entire data set can be stored in cache, all but eliminating cache misses.
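The mixed-tier idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's algorithm: blocks land in the large drive form-factor tier on a miss, and frequently read blocks are promoted to a small PCIe tier. The tier names, sizes, and heat threshold are all assumptions for the example.

```python
# Hypothetical sketch of a two-tier server-side cache.
# Tier sizes and the "hot" threshold are illustrative assumptions.

class TieredCache:
    def __init__(self, pcie_blocks, sata_blocks, hot_threshold=3):
        self.pcie = {}              # small, fastest tier (PCIe SSD)
        self.sata = {}              # large, cheaper tier (drive form-factor SSD)
        self.pcie_blocks = pcie_blocks
        self.sata_blocks = sata_blocks
        self.hot_threshold = hot_threshold
        self.access_count = {}      # per-block read frequency

    def read(self, block_id, backing_store):
        self.access_count[block_id] = self.access_count.get(block_id, 0) + 1
        if block_id in self.pcie:
            return self.pcie[block_id]
        if block_id in self.sata:
            data = self.sata[block_id]
            # Promote frequently read blocks to the PCIe tier.
            if (self.access_count[block_id] >= self.hot_threshold
                    and len(self.pcie) < self.pcie_blocks):
                self.pcie[block_id] = self.sata.pop(block_id)
            return data
        # Cache miss: fetch from shared storage, cache in the capacity tier.
        data = backing_store(block_id)
        if len(self.sata) < self.sata_blocks:
            self.sata[block_id] = data
        return data
```

With terabytes of the capacity tier available, the miss path above becomes rare after the working set warms up, which is the point the paragraph makes.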

The ROI of Cache Choice

The caching software also needs to provide choice. First, it should come in a variety of forms that support virtualized environments like VMware as well as bare-metal environments like Windows and Linux. While virtualization is commonplace across data centers, it is rarely ubiquitous within the data center. Often the mission-critical applications running on bare-metal servers are already consuming too many server and storage resources, yet they desperately need an improvement in storage performance.

The second kind of choice that caching software should provide is the ability to select the type of caching that’s used. Many vendors, especially those offering server-side caching solutions, limit you to read-only or write-through caching, which accelerates only read operations.

Many environments have applications with heavy or important write loads, driven by transaction logs and index updates. For these environments it is important to be able to safely accelerate write operations in addition to reads.
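The difference between the two policies can be shown in a minimal sketch. This is a generic illustration of the concepts, assuming a dictionary stands in for shared storage; the class and method names are not any product's API. Write-through acknowledges a write only after it reaches shared storage, so every write still crosses the network; write-back acknowledges from local flash and destages later.

```python
# Illustrative sketch of write-through vs. write-back caching policies.

class Cache:
    def __init__(self, backing_store, policy="write-through"):
        self.store = backing_store   # dict standing in for shared storage
        self.cache = {}              # server-side flash cache
        self.dirty = set()           # blocks not yet on shared storage
        self.policy = policy

    def write(self, block_id, data):
        self.cache[block_id] = data
        if self.policy == "write-through":
            # Acknowledged only after reaching shared storage:
            # safe, but only reads are accelerated.
            self.store[block_id] = data
        else:  # write-back
            # Acknowledged from local flash; persisted later on flush.
            self.dirty.add(block_id)

    def flush(self):
        # Destage dirty blocks to shared storage in block order.
        for block_id in sorted(self.dirty):
            self.store[block_id] = self.cache[block_id]
        self.dirty.clear()
```

A real write-back implementation must also protect the dirty blocks against server failure (for example by mirroring the cache), which is why the article stresses that write acceleration must be done safely.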

The ROI of Shared Storage Life Extension

The performance bottleneck that most environments face is caused by their shared storage systems. The traditional solution to a performance problem is to upgrade or replace that system with a newer, faster one. In many cases, though, performance is the only reason for replacement: these systems usually still have room for capacity expansion and continue to provide the reliability and data services (RAID, snapshots) that the organization needs.

Implementing server-side caching as a means to postpone a storage system upgrade or replacement is the key to realizing a very high ROI from the caching solution (and from the storage system itself). By extending its usable life, the company can postpone the time and effort to research, evaluate and implement a new system, as well as the obvious outlay for the upgrade itself. And since performance is the motivation for the new system, it will likely include some SSD in its configuration, so it will come at a premium price.

If server-side caching is selected as an alternative to system replacement or upgrade, then almost all of these associated costs can be eliminated. The existing storage system, and the time invested in learning its operation, are preserved. There may be another benefit to using server-side flash: it could do a better job of enhancing application performance than a storage system upgrade. Since the most active I/O now occurs on the physical server that needs the performance boost, application response time is minimized. This can also lead to a reduction in storage tuning and less administrator time spent on that task.

This application response component of the server-side caching ROI gets better the more widely it is deployed. As described above, by leveraging different types of flash devices even within the same server, large caches can be cost effectively configured so that cache misses are a rarity and writes are coalesced and written contiguously to the shared storage system.

The ROI of server-side caching is maximized when the shared storage system’s fundamental role changes to providing capacity and data services (RAID, snapshots, replication) instead of improving application performance. If the performance responsibility can be removed from the shared storage system, then its useful life can be extended well beyond what is considered the norm. It can also be configured to minimize cost per GB, an attribute at which HDDs excel. While SSDs compare favorably to HDDs on a cost-per-IOPS basis, they do not on a cost-per-GB basis. Blending the two, server-side flash for performance and shared HDD capacity for everything else, is ideal.

The ROI of Storage Network Life Extension

Another significant ROI advantage of server-side caching is the avoidance of a network upgrade, which almost always goes hand in hand with a shared storage system upgrade. After all, storage performance improvement can’t be realized if the ‘pipes’ aren’t also upgraded. This would be like buying a sports car to race on a dirt road.

The cost of a network upgrade can be significant. These costs would include newer, faster host bus adaptors (HBAs) on the servers, the network switches to support this faster bandwidth and the cost of a premium HBA option on the upgraded storage system itself. Once again there is the cost to implement and potentially learn a new set of tools. Network upgrades are disruptive by their nature. There is also a downtime cost associated with installing the new HBA and the associated drivers.

Even after the storage network is implemented and understood, there is the potential for a continuous cycle of fine tuning the network so that performance stays optimized.

While server-side caching does not eliminate the need for the storage network, it does off-load much of the traffic that can be causing the problem. Thanks to caching, repeated read operations don’t even traverse the network; in most environments that means a 60 to 80% reduction in traffic. These re-reads are served from inside the server that needs them, maximizing performance.
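The traffic-reduction claim is simple arithmetic, and a quick sketch makes it checkable. The 70% read mix and 90% hit rate below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the traffic-reduction claim, assuming
# only reads are cached (write-through). Inputs are illustrative.

def network_traffic_remaining(read_fraction, read_hit_rate):
    """Fraction of original storage-network traffic still on the wire."""
    reads_offloaded = read_fraction * read_hit_rate
    return 1.0 - reads_offloaded

# A 70% read mix with a 90% cache hit rate removes 63% of all traffic,
# leaving 37% on the network -- squarely in the 60-80% reduction range
# the article cites for read-heavy environments.
```

The more read-heavy the workload and the larger the cache (and thus the hit rate), the closer the reduction gets to the top of that range.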

Even in a read-caching-only environment this means that the bulk of storage network bandwidth is dedicated to write traffic, which also improves write performance. But when write caching can also be safely implemented, as mentioned above, then writes can be coalesced and sent across the network in a sequential manner, which is better for both networks and storage systems.
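Coalescing itself can be sketched generically: scattered dirty blocks are sorted and merged into contiguous runs, so the storage system receives a few sequential I/Os instead of many random ones. The range-merging logic below is a minimal illustration, not any product's destaging algorithm.

```python
# Sketch of write coalescing: merge dirty block numbers into
# (start, length) runs that can be written sequentially.

def coalesce(dirty_blocks):
    """Group block numbers into contiguous (start, length) runs."""
    runs = []
    for block in sorted(set(dirty_blocks)):
        if runs and block == runs[-1][0] + runs[-1][1]:
            # Extends the current run by one block.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap in block numbers: start a new sequential run.
            runs.append((block, 1))
    return runs

# Ten scattered writes collapse into three sequential I/Os:
# coalesce([5, 6, 7, 100, 101, 9, 8, 200, 201, 202])
# -> [(5, 5), (100, 2), (200, 3)]
```

Fewer, larger, sequential transfers are cheaper for both the network and the back-end disks, which is why write-back caching benefits the shared infrastructure and not just the server doing the writing.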

Finally, network tuning is all but eliminated. Thanks to server-side caching, performance sensitive I/O transactions are now occurring inside the server with a large amount of those transactions (subsequent reads) never leaving the server. The network, like the storage system, becomes focused on capacity and data services.


Server-side caching often appears to be an inexpensive quick fix for one or two performance-demanding applications. In reality, its ROI becomes even better the more widely it is deployed. While storage network and storage system upgrades are eventualities, server-side SSDs with the right caching software can significantly postpone that event.

Click Here To Watch the On Demand Webinar:

“The Five Myths of Server-side Caches – Separating Fact from Fiction”

SanDisk is a client of Storage Switzerland

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
