IT organizations select NetApp storage because it excels at managing unstructured data. When NetApp first introduced the filer concept, the data it was intended to store was typically user-created files from office productivity applications. Those files were created, modified, finalized, and rarely used again. Even large organizations had a finite number of users accessing the system at the same time, and their connections were often much slower than the storage performance capabilities of the filer.
The use case for unstructured data is dramatically different today. Machines and devices now create and store data non-stop, and massively parallel compute clusters process that data as fast as they can access it. The storage performance of the filer needs to improve substantially to keep pace, leading many companies to consider replacing their NetApp investments.
Filers have two responsibilities. The first is to store unstructured data; the second is to respond to I/O requests from users and applications for that data. Delivering on the first responsibility requires storage capacity, something filers do very well. The second requires storage performance, something filers don't do very well, especially as they reach their capacity limits or as access frequency grows.
Flash storage seems like the obvious answer, and there are countless all-flash solutions vying to replace an organization's NetApp system. Even NetApp wants to replace its aging installed filer base with a bigger, faster flash-assisted NetApp system.
The problem is that in most cases the filer's useful life span is not over. Replacing it with a new vendor's system means losing the investment from both a capital and an operations perspective. Upgrading to a faster NetApp solution preserves the investment in learning NetApp software, but only fixes the performance problem temporarily.
In either case, the filer has often not exceeded its capacity. Instead of replacing or upgrading the NetApp filer, organizations should consider a more surgical solution: address the performance problem separately while continuing to leverage NetApp for storage capacity, as well as for NetApp's rich data services such as snapshots and replication.
The challenge most storage systems face is that performance and capacity are glued together. Even though they are very different problems, administrators cannot solve them individually. The solution is to abstract storage performance from storage capacity and solve each problem separately. The storage performance tier would reside on dedicated appliances, often called "edge filers," which run streamlined software focused primarily on storage I/O performance. These appliances are flash-based systems whose main job is to turn around I/O requests for the most active data set. The legacy filer's role would shift to storing and protecting data, which of course NetApp's operating system, Data ONTAP, excels at.
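Conceptually, an edge filer acts as a read-through cache in front of the capacity tier: hot data is answered from flash, and only misses touch the legacy filer. The sketch below illustrates that split in miniature. All names here (EdgeCache, the dictionary standing in for the filer) are illustrative assumptions for explanation only, not a real NetApp or Avere API.

```python
# Minimal sketch of the edge-filer read path: serve the active data set
# from a flash-backed LRU cache, fall back to the capacity filer on a miss.
# This is a conceptual illustration, not vendor code.
from collections import OrderedDict

class EdgeCache:
    """Stands in for the flash performance tier in front of a capacity tier."""

    def __init__(self, capacity_tier, max_items=1024):
        self.capacity_tier = capacity_tier  # backing "filer": path -> bytes
        self.max_items = max_items
        self._cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, path):
        if path in self._cache:
            self.hits += 1
            self._cache.move_to_end(path)    # keep hot data hot (LRU order)
            return self._cache[path]
        self.misses += 1
        data = self.capacity_tier[path]      # slow path: go to the filer
        self._cache[path] = data
        if len(self._cache) > self.max_items:
            self._cache.popitem(last=False)  # evict the coldest entry
        return data

# Usage: the second read of the same file never touches the capacity tier.
filer = {"/vol/projects/a.dat": b"cold data", "/vol/projects/b.dat": b"hot data"}
edge = EdgeCache(filer, max_items=8)
edge.read("/vol/projects/b.dat")  # miss: fetched from the filer, cached in flash
edge.read("/vol/projects/b.dat")  # hit: served from the edge tier
```

The point of the sketch is the division of labor: the cache answers repeat I/O for active data at flash speed, while the filer keeps its strengths of capacity, snapshots, and replication.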
Join Storage Switzerland and Avere Systems for our upcoming live webinar, "4 Ways to Improve NetApp Storage Performance Without Replacing It". We will discuss the various NetApp performance problems and how you can solve them while keeping your investment in NetApp in place.