Does All-Flash Kill Data Management?

Most modern storage systems have a scale-out design, meaning they can expand to meet almost any data center’s capacity requirements. And since an increasing number of these systems are all-flash, they can also meet most data centers’ performance requirements. The combination of nearly unlimited scaling and nearly unlimited performance is leading some all-flash vendors to declare data management dead.

The two primary functions of a data management process are capacity management (data tiering) and performance management (placing the right data on the right media at the right time). While a scale-out all-flash array does not eliminate the need for data management, it may make some IT professionals think the practice is no longer worth the effort. In fairness, it does take time and effort to manage data correctly, and the potential payback has to be large enough to justify that effort. Data management solutions can increase their appeal by reducing complexity, which lowers the effort; by expanding the payback; or, preferably, by doing both.

A robust data management policy reduces investment in storage and backup infrastructure while providing better data preservation. For all-flash arrays to make the data management effort “not worth it,” they have to be both cost-effective and able to protect and preserve data better than the data management solution. The ever-decreasing cost of all-flash arrays does make them cost-effective, but few vendors have done anything to improve preservation.

The first aspect to examine is cost. While most all-flash vendors claim price parity with disk-based arrays, these comparisons are against high-performance hard disk-based systems. An environment governed by a data management process will typically use a much smaller, high-performance flash array to store only the most current data, and then leverage a moderately performing, high-capacity storage system to store inactive data. This secondary storage target can be any cost-effective storage solution, such as a hard disk array, a high-density flash array, object storage, tape, or the cloud. The result should be a less expensive overall storage investment and greater flexibility in storage vendor selection.
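As a rough illustration of the cost argument, the sketch below compares an all-flash-only design with a tiered one. The capacity split and the per-TB prices are purely illustrative assumptions, not vendor figures.

```python
# Back-of-envelope comparison of an all-flash design vs. a tiered design.
# All capacities and per-TB prices below are illustrative assumptions only.

TOTAL_TB = 500          # total usable capacity the data center needs
ACTIVE_FRACTION = 0.15  # assume ~15% of data is active at any given time

FLASH_COST_PER_TB = 400      # assumed $/TB for high-performance flash
CAPACITY_COST_PER_TB = 100   # assumed $/TB for the disk/object/cloud capacity tier

all_flash_cost = TOTAL_TB * FLASH_COST_PER_TB

tiered_cost = (TOTAL_TB * ACTIVE_FRACTION * FLASH_COST_PER_TB
               + TOTAL_TB * (1 - ACTIVE_FRACTION) * CAPACITY_COST_PER_TB)

print(f"All-flash only:        ${all_flash_cost:,.0f}")
print(f"Flash + capacity tier: ${tiered_cost:,.0f}")
print(f"Savings:               {100 * (1 - tiered_cost / all_flash_cost):.0f}%")
```

With these assumed numbers the tiered design comes in at roughly a third of the all-flash cost; the exact savings obviously depend on how much of the data is truly active and on the actual per-TB prices.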

A challenge for a data management process is classifying data to make sure it is on the right type of storage at the right time. Today, manually identifying and moving data between storage types is impractical from both a time and an accuracy standpoint. Automation is a must, and there are several automated data movement solutions on the market, but the data center needs more. A next-generation data management solution should do more than make placement decisions based on age or last access; it should predict and preposition data based on its analysis of access patterns.
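As a minimal sketch of what “predict and preposition” could look like, assuming the solution keeps a per-object access history: a simple heat score that weights recent accesses more heavily than old ones. The function names, decay heuristic, and threshold below are hypothetical, not drawn from any specific product.

```python
import time

def heat_score(access_timestamps, half_life_days=7):
    """Score an object by its access history, weighting recent accesses
    more heavily via exponential decay (hypothetical heuristic)."""
    now = time.time()
    half_life = half_life_days * 86400
    score = 0.0
    for ts in access_timestamps:
        age = max(now - ts, 0)
        score += 0.5 ** (age / half_life)  # each access loses half its weight every half-life
    return score

def predict_tier(access_timestamps, hot_threshold=2.0):
    """Place an object on flash if its recent activity suggests it will be
    read again soon; otherwise place it on the capacity tier."""
    return "flash" if heat_score(access_timestamps) >= hot_threshold else "capacity"
```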

A data management solution, unlike an archive, moves data in two directions. An archive solution’s goal is to transfer data to the slowest, least expensive tier of storage, but a data management system should make sure data is on the right tier at the right time, balancing performance demands against potential cost savings. Additionally, a data management tool should determine how the data should be managed based on the data itself.
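Because movement runs in both directions, the same scoring has to drive promotion as well as demotion, and different classes of data may warrant different thresholds. The sketch below is only one way such a data-driven policy could be expressed; the class names and threshold values are illustrative assumptions.

```python
# Hypothetical per-class policies: what counts as "hot" or "cold" depends on the data itself.
POLICIES = {
    "database":  {"promote_above": 5.0, "demote_below": 1.0},
    "home_dirs": {"promote_above": 3.0, "demote_below": 0.5},
    "backups":   {"promote_above": 50.0, "demote_below": 0.1},  # effectively never promoted
}

def placement_action(data_class, current_tier, score):
    """Decide whether to promote, demote, or leave an object in place,
    given a heat score such as the one sketched above."""
    policy = POLICIES[data_class]
    if current_tier == "capacity" and score >= policy["promote_above"]:
        return "promote to flash"
    if current_tier == "flash" and score <= policy["demote_below"]:
        return "demote to capacity tier"
    return "leave in place"
```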

The key is for this identification, classification, and movement process to be as seamless as possible to the environment. Ideally, IT creates a series of data management policies and the data management solution positions, copies, and preserves data by following those policies. Further, the solution can add value by including adjacent capabilities. Search is an excellent example: since the system is already identifying and classifying data, creating a searchable metadata index is an obvious next step.
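Since the solution already walks and classifies the data, a searchable metadata index can be a byproduct of that same pass. Below is a minimal sketch using Python’s standard-library sqlite3; the schema, fields, and tiering placeholder are illustrative assumptions rather than any product’s actual design.

```python
import os
import sqlite3
import time

def build_metadata_index(root, db_path="metadata.db"):
    """Walk a directory tree and record basic metadata for each file so it
    can be searched later without touching the storage tiers themselves."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, tier TEXT)""")
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            # Tier assignment here is a placeholder; a real solution would
            # record where its policy engine actually placed the data.
            tier = "flash" if time.time() - st.st_mtime < 30 * 86400 else "capacity"
            con.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                        (path, st.st_size, st.st_mtime, tier))
    con.commit()
    return con

# Example query: find large, cold files regardless of which tier holds them.
# con = build_metadata_index("/data")
# rows = con.execute("SELECT path, size FROM files "
#                    "WHERE tier = 'capacity' AND size > 1e9").fetchall()
```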

StorageSwiss Take

A single tier that stores all data regardless of type seems like a simple solution. But the types of data organizations store today are too varied, and their requirements too unique, for a single system to deliver – at least cost-effectively. Data management vendors, however, need to simplify the process of moving data between the various types of storage based on policy, while at the same time adding long-sought-after capabilities like search.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
