In our webinar, “Overcoming the RoadBlocks to the All-Flash Data Center,” one of the questions that came up was how to integrate an All-Flash Array into the data center. It’s not our position that you should throw out all your existing hard-drive-based arrays. The role of hard drives will change as they’re augmented, not replaced, by flash in the data center, much as the role of tape changed when disk was inserted into the backup process.
Step One – Selecting the All-Flash Workloads
Unlike other storage refresh cycles, most All-Flash Arrays are not bought with the expectation that all the data will be migrated off the old array. You should be selective about what goes onto this array, though not nearly as selective as hybrid approaches required in the past. Target just about every database application in the environment and many of your virtual workloads. A simple rule of thumb: any application running on a 15K RPM HDD array should be moved to the All-Flash Array first. Then look at applications and data sets running on slower HDD tiers.
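The rule of thumb above can be sketched as a simple prioritization pass. This is an illustrative Python sketch, not a vendor tool: the tier names, the sample inventory, and the "databases first within a tier" tie-breaker are all assumptions for the example.

```python
# Rank workloads for all-flash migration by the disk tier they run on today.
# Tier names and the sample inventory below are hypothetical, for illustration.

TIER_PRIORITY = {
    "15k-hdd": 0,   # rule of thumb: 15K RPM workloads move first
    "10k-hdd": 1,   # slower HDD tiers are evaluated next
    "7k-hdd": 2,
}

def flash_migration_order(inventory):
    """Sort workloads by migration priority; databases first within a tier."""
    return sorted(
        inventory,
        key=lambda w: (TIER_PRIORITY.get(w["tier"], 99), not w["is_database"]),
    )

inventory = [
    {"name": "file-share", "tier": "7k-hdd",  "is_database": False},
    {"name": "erp-db",     "tier": "15k-hdd", "is_database": True},
    {"name": "vdi-pool",   "tier": "10k-hdd", "is_database": False},
]

for w in flash_migration_order(inventory):
    print(w["name"])
```

The key point the sketch captures is that selection is driven by where the workload lives today, not by guessing at its I/O profile.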
Step Two – Migrating the Data
The second challenge is how to move the data between the two systems. Several tools can assist with this migration, and a few can do it seamlessly. Many storage virtualization software solutions can be leveraged just for data movement as well. The real answer, though, may be found in the hypervisors that are driving virtualization. VMware’s Storage vMotion, for example, allows live migration of a storage volume from one storage system to another, even if they’re from different manufacturers.
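The same live migration VMware exposes in its client can also be driven programmatically, which matters if you have many volumes to move. Below is a minimal sketch using the pyVmomi SDK’s RelocateVM_Task call; the vCenter address, credentials, and VM/datastore names are placeholders, error handling is omitted, and this should be read as an assumption-laden illustration rather than a supported migration runbook.

```python
def storage_vmotion(vcenter, user, password, vm_name, target_datastore):
    """Live-migrate a VM's storage to target_datastore via Storage vMotion.

    All connection details and object names are hypothetical placeholders.
    Requires the pyVmomi SDK (imported lazily so the sketch stands alone).
    """
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host=vcenter, user=user, pwd=password)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
        vm = next(o for o in view.view
                  if isinstance(o, vim.VirtualMachine) and o.name == vm_name)
        ds = next(o for o in view.view
                  if isinstance(o, vim.Datastore) and o.name == target_datastore)
        # A RelocateSpec that names only a datastore moves storage, not the
        # host -- that is the Storage vMotion case described above.
        return vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
    finally:
        Disconnect(si)
```

Because the VM keeps running during the task, this is how an array-to-array move can be done without a maintenance window, even across manufacturers.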
Step Three – Analyze the Workloads…or Not
The final step is to analyze those workloads as they run on the All-Flash Array. You can use the built-in operating system and hypervisor tools or a third-party product like we discussed in the article, “How Do I Know My Virtual Environment is Ready For SSD”. The key difference between an All-Flash world and a Hybrid world is how often that analysis needs to be done.
With a Hybrid Array you’re trying to leverage a very small amount of flash to accelerate a much larger data set. As a result, this analysis has to be done continuously by tiering and caching software, usually in the background.
In the All-Flash world the pressure to continuously monitor and manage is almost eliminated: everything is fast all the time. If an application or data set can’t take full advantage of the performance afforded by flash storage, it can still take advantage of the simplicity these systems provide. The only limitation is capacity consumption. If you run out of capacity, you may need to re-evaluate the workloads running on the All-Flash Array to make sure they’re the most “flash worthy”. The other option is to simply expand your flash capacity. One of the points that came up in yesterday’s webinar is that Nimbus, our sponsor, is seeing 70%+ of its business come from repeat customers. In other words, customers are choosing to increase their investments in flash instead of spending more time struggling with performance ‘optimization’.
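That capacity decision reduces to a trivial check. Here is a minimal Python sketch; the 80% utilization threshold is an assumed operator-chosen value, not a vendor recommendation, and the capacity figures are made up for the example.

```python
def flash_capacity_action(used_tb, total_tb, threshold=0.80):
    """Suggest a next step as all-flash capacity fills.

    threshold is an assumed, operator-chosen utilization level (80% here),
    not a vendor recommendation.
    """
    utilization = used_tb / total_tb
    if utilization < threshold:
        return "ok"  # headroom remains; no action needed
    # Past the threshold, the two options from the text apply: re-evaluate
    # which workloads are "flash worthy", or simply expand flash capacity.
    return "re-evaluate workloads or expand capacity"

print(flash_capacity_action(30, 50))  # plenty of headroom
print(flash_capacity_action(45, 50))  # time to decide
```

The point is that this check replaces the continuous tiering analysis of the hybrid world: you only revisit workload placement when capacity, not performance, forces the question.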
One of the best parts of our webinars is the question-and-answer session. In fact, we have shortened the presentation section to allow for expanded Q&A. While many people don’t ask questions, almost every attendee stays around for it. Our webinar on “Overcoming the RoadBlocks To The All-Flash Data Center” was no exception. While that webinar is available on demand, we decided to do a second live event from scratch so you can ask your questions or at least hear new ones. So if you haven’t signed up, I encourage you to do so and get your All-Flash questions answered today: