Which Is The Right Way To Implement Flash Storage?

During a recent Storage Swiss webinar, we polled our audience about how they were currently using caching technology in their environments. A significant percentage of respondents replied that they were using caching directly on their server infrastructure.

This is not surprising, given that server-side caching offerings are generally a less disruptive approach to accelerating applications and, when deployed selectively, can be more economical than implementing SSD arrays.

The question is: does it make sense to use both? While deploying server-side caching is a highly targeted approach to application acceleration, it does have some pitfalls. For example, while reads can be safely accelerated in a server-side cache, there are inherent risks in writing to a local server cache: if the host’s flash or SSD device fails before newly written data is written back to a RAID-protected hard disk storage area, that data can be lost. Consequently, most server-side cache implementations are configured to accelerate reads only.
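The read-safe/write-risky distinction above can be sketched with a toy write-through cache. This is purely illustrative (the class and its structure are assumptions for explanation, not any vendor's implementation): reads are served from fast local flash when possible, while writes always land on the protected array before anything else, so losing the cache device never loses new data.

```python
class WriteThroughCache:
    """Illustrative server-side read cache (hypothetical sketch).

    Reads are served from local flash on a hit; writes go straight
    through to the RAID-protected array, so a cache-device failure
    can never lose newly written data.
    """

    def __init__(self, backing_store):
        self.backing_store = backing_store  # RAID-protected array (a dict here)
        self.cache = {}                     # stands in for the local SSD

    def read(self, block):
        if block in self.cache:             # cache hit: fast local read
            return self.cache[block]
        data = self.backing_store[block]    # cache miss: fetch from the array...
        self.cache[block] = data            # ...and populate the cache
        return data

    def write(self, block, data):
        self.backing_store[block] = data    # write through to protected storage first
        self.cache[block] = data            # then refresh the (expendable) cache copy
```

A write-back cache, by contrast, would acknowledge the write after updating only the local copy, which is exactly the window of exposure the paragraph above describes.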

All-flash and hybrid disk arrays can safely accelerate both reads and writes simply because they utilize RAID to ensure data protection. Another benefit of these solutions is that they accelerate performance for all applications. This removes an element of uncertainty for system planners, who, in today’s virtualized world, increasingly have to monitor and tune applications to ensure performance service levels are being met.

The downside to flash and hybrid arrays is that they require a much larger up-front investment than simply populating a few hosts with SSD devices. In addition, there is limited fine-grained control over how application acceleration is prioritized on shared arrays, since quality of service (QoS) is enforced at the volume level rather than at the application level.

The bottom line is that the two technologies can ultimately complement each other. Server-side caching can be used to accelerate the most critical business systems, where application storage I/O contention cannot be tolerated and quality of service is paramount. All-flash and hybrid arrays can then be leveraged to handle the bulk of the workload for the other applications in the environment. Furthermore, when cache acceleration software can utilize a “remote cache”, such as SSD on a storage array, to safely accelerate write data, organizations can leverage both types of flash investment very effectively.
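The “remote cache” idea can be sketched in the same toy style (again a hypothetical illustration; real products coordinate this at the driver and array level): a write is considered safe once it lands on the array-side SSD, which survives a host flash failure, and is destaged to disk later.

```python
class RemoteProtectedWriteCache:
    """Hypothetical sketch of write acceleration via a 'remote cache'.

    A write is acknowledged only after it reaches the array-side SSD
    (which survives a local flash failure); the slower destage to the
    RAID-protected disks happens in the background.
    """

    def __init__(self, remote_cache, disk):
        self.local = {}              # host flash: fast copy for subsequent reads
        self.remote = remote_cache   # array-side SSD: protected landing zone
        self.disk = disk             # final RAID-protected resting place

    def write(self, block, data):
        self.local[block] = data     # fast local copy (loss here is harmless)
        self.remote[block] = data    # protected copy; write is now safe to acknowledge

    def destage(self):
        # Background task: flush the protected cache contents down to disk.
        for block, data in self.remote.items():
            self.disk[block] = data
        self.remote.clear()
```

The design point is that acknowledgement waits for the protected copy, not the local one, which is what lets writes be accelerated without reintroducing the data-loss risk described earlier.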

For more information on determining the best approach for your environment, follow the link to the on-demand version of the Storage Switzerland and Intel webcast “Can’t We Get Along? How Server-Side Caching Can Complement Shared SSD Arrays”.


As a 22-year IT veteran, Colm has worked in a variety of capacities, ranging from technical support of critical OLTP environments to consultative sales and marketing for system integrators and manufacturers. His focus on enterprise storage, backup, and disaster recovery solutions spans mainframe and distributed computing environments across a wide range of industries.

One comment on “Which Is The Right Way To Implement Flash Storage?”
  1. Very good article Colm! Another very powerful way to leverage SAN-based or even server-side flash/SSD is through the use of SANsymphony-V (SSY-V) from DataCore. You can leverage ‘any’ block storage device from ‘any’ manufacturer and also provide synchronous mirroring for that storage at the SAN-level for additional protection against failures. SSY-V can also run in a VM, allowing you to leverage locally installed flash/SSD to provide even more acceleration closer to the application. You can even mirror flash/SSD devices on the server-side to give you additional protection.

