How to Implement a Risk-free, Network-based Flash Caching Solution

Flash-based data caching is a popular choice for cost-effectively improving application performance in many different environments, but where that cache is located is a central part of any decision to use flash caching. Host-based caching (also called “server-side” caching) can offer the lowest latency but essentially limits that cache to a single server. Array-based caching can support multiple server hosts and HA configurations, but it isn’t easily shared with hosts connected to other arrays on the SAN, and it can lock users into one supplier.
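Whatever its location, a flash cache follows the same basic pattern: serve frequently read blocks from fast media and fall back to the backing store on a miss. A minimal sketch of that behavior, using an LRU read cache in Python (all names here are illustrative, not any vendor's implementation):

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache: serve hot blocks from fast media,
    fall back to the backing store (e.g. a SAN array) on a miss."""
    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store           # dict of block_id -> data
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow path: read from the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

store = {n: f"block-{n}" for n in range(8)}
cache = ReadCache(capacity_blocks=4, backing_store=store)
for n in [0, 1, 2, 0, 1, 3, 0, 4, 0, 1]:       # a re-read-heavy workload
    cache.read(n)
print(cache.hits, cache.misses)                # prints: 5 5
```

The re-read-heavy access pattern is what makes caching pay off; the placement question the article raises is simply where this logic runs: in the host, in the array, or in the network.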

For larger, multi-array environments, network-based caching can support any array on the SAN, but this technology has historically presented some challenges as well. To provide the lowest latency, caching appliances are logically connected between the switch and the host or storage arrays. These ‘in-line’ implementations often mean a complex reconfiguration of switches, hosts or storage systems, and can bring the risks associated with inserting anything into the production data path. They can also force a re-architecting of applications so that they point to the cache storage area instead of the original data set. A new solution from Cirrus Data is designed to resolve these challenges and make SAN-based caching more viable for the enterprise.

Plug and play

Cirrus’s Data Caching Server (DCS) is a caching appliance that connects directly into a Fibre Channel (FC) SAN, between the switches and the production storage systems, providing tens of terabytes of flash capacity to accelerate applications for all servers in the environment. Designed for truly plug-and-play implementation, the DCS is logically transparent to hosts on the SAN, enabling it to be deployed into a live environment without downtime.

Using Cirrus’s Transparent Data Intercept (TDI) technology, the DCS requires no reconfiguration of FC switches, hosts or storage. Upon implementation, the unit auto-discovers each path through the switch and reroutes them, one at a time, through the DCS. The result is a shared caching appliance that goes in with a minimum of effort and can be removed just as easily, eliminating the risks typically associated with an in-line solution.
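Conceptually, one-path-at-a-time insertion works because multipath I/O keeps each LUN reachable over the remaining paths while a single path is briefly rerouted. A sketch of that sequencing, under the assumption of a standard multipath configuration (hypothetical names; not Cirrus's actual TDI implementation):

```python
# Hypothetical sketch of one-at-a-time in-line insertion: while any one
# path is being rerouted through the appliance, multipath I/O keeps the
# LUN reachable over the paths that are still directly connected.
paths = {f"path-{n}": "direct" for n in range(4)}   # host <-> array paths

def lun_reachable():
    # The LUN stays online as long as at least one path carries I/O.
    return any(s in ("direct", "via-appliance") for s in paths.values())

def insert_appliance(path):
    paths[path] = "rerouting"          # this one path goes dark briefly
    assert lun_reachable(), "multipath must cover the path being moved"
    paths[path] = "via-appliance"      # path re-established through the cache

for p in list(paths):                  # migrate every path, one at a time
    insert_appliance(p)

print(all(s == "via-appliance" for s in paths.values()))  # prints: True
```

The key design point is the ordering constraint: at no step are all paths down simultaneously, which is why the insertion can happen in a live environment.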

Open hardware architecture

The current DCS appliance leverages an off-the-shelf Dell server running Linux, making it easy for VARs to integrate this product into a comprehensive solution. Its open hardware architecture allows the DCS to ‘ride the OEM upgrade curve’, taking advantage of improvements in hardware without any additional integration work by the manufacturer or reseller. This non-proprietary design also makes it possible to implement the DCS on other server vendors’ hardware, helping to keep costs down.

Typically deployed in HA pairs, each DCS appliance can support up to 3 x 3.2TB PCIe flash cards (currently LSI Nytro WarpDrive), accelerating up to eight data paths and providing total performance of 3GB/s and 2M IOPS. An external chassis with up to eight more cards can also be added to the configuration, creating a caching capacity of almost 40TB.

Storage Swiss Take

The ‘secret sauce’ for this product is its ability to automatically and transparently reroute FC data paths through the device. In a particularly compelling recent demo, Cirrus’s CEO systematically unplugged working FC connections to the storage array and attached them to the DCS. He then reconnected each back into the array, successfully inserting the caching appliance in-line between the switch and the storage system. In roughly 5 seconds each, data paths were automatically reestablished between the host and the storage system, transparently, through the DCS. The only “impact” we saw was a positive one: IOPS improving dramatically on the test suite. This demo was conducted without any changes to the host HBA, the switch or the storage system.

Network caching seems like the ideal solution for the large enterprise with multiple arrays that could benefit from acceleration. But these solutions have had difficulty gaining traction; the primary reason, we believe, being the cost and complexity they’ve carried with them. Many of these solutions have used expensive custom hardware or required a disruptive implementation process that was deemed too risky for production environments. But Cirrus seems to have addressed these concerns by leveraging off-the-shelf hardware and plug-and-play technology to create a network caching appliance that may meet the need for speed in the enterprise environment.

Cirrus Data is not a client of Storage Switzerland


Eric is an Analyst with Storage Switzerland and has over 25 years’ experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.

Posted in Briefing Note
