10 Reasons that Flash Belongs in the Server

Watch the On Demand Webinar “New Use Cases For Server Side Flash”

Maintaining storage performance is a constant IT responsibility, and there are many ways to do it: some easy, some complicated, some affordable, some not so much. But as with most decisions IT makes, there isn't one best way to solve performance problems. Flash is usually part of the solution, but there are many ways to implement these high-performance storage devices.

IT needs options. One of those options is caching software that leverages flash in the application server. In fact, server-side flash caching is one of the most affordable, least disruptive and most impactful solutions available. Here are ten reasons why it should be considered first.

Easy to deploy

Server-side flash only impacts one server, not the entire SAN, and caching doesn't require any data migration; data is moved into the cache automatically.


Buying a single SSD for an application server requires much less flash capacity than upgrading a shared storage array.


Some server-side flash caching solutions can leverage several form factors from many different vendors, meaning no vendor lock-in and a lower cost per GB.
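The automatic "moved into the cache" behavior described above can be sketched as a minimal read-through LRU cache. This is a conceptual illustration only, with a Python dict standing in for the shared storage array; it is not any vendor's implementation.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Minimal LRU read-through cache: on a miss, the block is fetched
    from backing storage and promoted into the cache automatically."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # stand-in for the SAN
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # block_id -> data, LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # fetch from shared storage
        self.cache[block_id] = data           # promote into the flash cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

san = {i: f"block-{i}" for i in range(100)}   # stand-in for shared storage
cache = ReadThroughCache(san, capacity_blocks=10)
cache.read(5)   # miss: fetched from the "SAN" and cached
cache.read(5)   # hit: served locally, no SAN traffic
```

No administrator places data in the cache; hot blocks accumulate there simply because they are read, which is why deployment requires no data migration.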

Local storage performance

Using PCIe, SATA or SAS SSDs puts this high-performance storage closest to the application and the CPU that will ultimately use it. There's no network latency like there is with a SAN.

PCIe and memory bus performance

Besides taking network latency out of the equation, server-side flash can leverage the speed of the PCIe bus and even the memory bus for the lowest possible storage latency.
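To put rough numbers on the latency argument, effective read latency under a cache is the hit-rate-weighted average of local flash latency and SAN latency. The figures below are illustrative assumptions, not measurements from any product.

```python
# Illustrative (assumed) latencies: ~100 microseconds for a local
# flash hit, ~1,000 microseconds for a read that traverses the SAN.
FLASH_US = 100
SAN_US = 1000

def effective_latency_us(hit_rate):
    """Hit-rate-weighted average read latency in microseconds."""
    return hit_rate * FLASH_US + (1 - hit_rate) * SAN_US

avg = effective_latency_us(0.90)  # roughly 190 us, vs 1,000 us uncached
```

The same hit rate also measures SAN offload: at a 90% hit rate, nine out of ten reads never leave the server, which is the traffic reduction discussed below.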

Easier to cost justify

Buying flash for each server is an incremental purchase that can be small enough to be made by the server or application team, not a system-level upgrade that requires capital budgeting.

Lower overhead

Server-side caching is an automated process, one that typically requires less administration than tiering solutions and doesn’t add to SAN management overhead.


Caching doesn’t require the redundancy of tiering and, as a result, typically requires less flash capacity, making it a more efficient solution.

Improved SAN performance

Server-side flash caching reduces the load on shared storage. Some applications see as much as a 90% reduction in SAN traffic, freeing those resources for other servers and applications.

Improved VM density

The added performance of local flash can immediately boost the number of VMs that each host can support.

For more information, watch this on-demand webinar in which experts from Storage Switzerland and SanDisk discuss new use cases for server-side flash.

Click Here To Watch On Demand


SanDisk is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years' experience in high-technology industries. He's held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.
