Despite the headlines, most data centers have not converted to all-flash arrays just yet. While they may have a few workloads that can benefit from flash performance, most of their workloads are performing just fine on a traditional hard disk-based array, and in many cases, those arrays are still under warranty and not ready for replacement. How can organizations deliver flash to the workloads that need it and get the most out of their flash investment?
Maybe the obvious solution is to buy a small flash array and just move the workloads that need flash performance over to it. After all, most all-flash vendors have an option in their portfolio that lets customers start small and expand as the number of workloads increases.
The problem with this approach is that those workloads need to be identified, data needs to be moved and the application and user configurations updated. It also doesn't establish a way to identify the next candidate for movement to the flash array, nor does it establish a method to move data off of the flash array when a particular application no longer needs it. Basically, IT waits for a user to raise their hand before moving the application or data set, and then that data stays put until it's time for another hardware upgrade. Not exactly scientific.
Establish Flow Before Flash
A better path is to establish a data management strategy before implementing a flash array. This strategy should center on a data management solution that can analyze existing data assets, identify which workloads would truly benefit from flash performance and then move only that data. This data movement can be done seamlessly by managing the metadata centrally. All applications and users would access data through the central metadata repository, meaning that as data moves to and from the flash array, no updates to the applications are needed.
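One way to picture this metadata-driven indirection is a central catalog that maps a logical path to a physical location, so that migrating a data set between tiers only repoints a catalog entry rather than reconfiguring applications. The sketch below is illustrative only; the class and method names are hypothetical and do not reflect any vendor's actual API:

```python
# Hypothetical sketch of metadata-managed data placement: applications
# resolve a logical path through a central catalog, so moving a data set
# between tiers only rewrites the catalog entry, not the applications.

class MetadataCatalog:
    def __init__(self):
        # logical path -> (tier, physical location)
        self._entries = {}

    def register(self, logical_path, tier, physical_path):
        self._entries[logical_path] = (tier, physical_path)

    def resolve(self, logical_path):
        """Applications call this instead of hard-coding storage paths."""
        tier, physical_path = self._entries[logical_path]
        return physical_path

    def migrate(self, logical_path, new_tier, new_physical_path):
        # The storage layer copies the blocks; here we only repoint the
        # metadata, so every later resolve() transparently hits the new tier.
        self._entries[logical_path] = (new_tier, new_physical_path)


catalog = MetadataCatalog()
catalog.register("/sales/db1", tier="hdd", physical_path="hdd-array:/vol7/db1")
catalog.migrate("/sales/db1", new_tier="flash",
                new_physical_path="flash-array:/vol1/db1")
print(catalog.resolve("/sales/db1"))  # flash-array:/vol1/db1
```

Because applications only ever hold the logical path, the same mechanism works in reverse: demoting cooled-off data from flash back to disk is just another `migrate` call.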
The ability to pull data back from flash may mean the small initial flash array investment actually becomes the only flash investment. This is because in most organizations, data is active for less than 90 days and then becomes a permanent member of the inactive data set. In fact, a data management solution could also eliminate the need for growth in the legacy hard disk array, as data could be migrated seamlessly to a long-term repository in the cloud.
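The 90-day activity window described above suggests a simple age-based placement rule: recently touched data lives on flash, cooling data on disk, and truly inactive data in a cloud archive. A minimal sketch of such a rule follows; the tier names and the 365-day archive threshold are assumptions for illustration, not figures from the article:

```python
from datetime import datetime, timedelta

# Illustrative age-based tiering rule built on the observation that data
# is typically active for under 90 days. Thresholds are assumptions for
# this sketch, not product defaults.

def choose_tier(last_access, hot_days=90, archive_days=365, now=None):
    """Pick a storage tier from how long ago the data was last accessed."""
    now = now or datetime.now()
    age = now - last_access
    if age <= timedelta(days=hot_days):
        return "flash"          # actively used: keep on the flash array
    if age <= timedelta(days=archive_days):
        return "hdd"            # cooling off: legacy hard disk array
    return "cloud-archive"      # inactive: long-term cloud repository


now = datetime(2024, 1, 1)
print(choose_tier(datetime(2023, 12, 1), now=now))  # flash
print(choose_tier(datetime(2023, 6, 1), now=now))   # hdd
print(choose_tier(datetime(2022, 1, 1), now=now))   # cloud-archive
```

Run periodically against the metadata repository's access times, a rule like this is what keeps the flash tier small: data ages off of flash automatically instead of waiting for the next hardware refresh.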
All-flash arrays solve a lot of problems, but most organizations only need a small part of their data set residing on them. The decision to go all-in on flash is usually made because IT doesn't have a way to seamlessly separate, categorize and organize its data. With a data management solution that also manages metadata, organizations can reduce the cost of their flash investment while opening up opportunities for object storage and cloud storage.
To learn more about how data management solutions can eliminate headaches and improve storage resource utilization, check out our on-demand webinar, "Why Data Migration Hurts And How To Stop the Pain."