My recent article stating that the current generation of All-Flash Arrays should last for 10 years created a bit of a stir across the storage community. Chris Evans over at Architecting IT did the best job of arguing against a 10-year flash array, but let me clarify my stance. I am not saying that technology, especially flash technology, won't advance over the next 10 years. Clearly it will. What is changing is the reason to upgrade, and that change may give these new systems a much longer service life.
The classic motivations to upgrade a storage system are needing more performance or more capacity. But All-Flash Arrays, thanks to their high performance and efficient use of capacity, may meet those needs for much longer than we are accustomed to with traditional storage arrays. To be sure, plenty of data centers will need progressively more performance and capacity as the years unfold. But at least as many data centers will find that the performance and capacity of today's All-Flash Arrays are so far ahead of what their application environments require that they will not need to upgrade, at least not for those reasons. In effect, these data centers will grow into the capabilities of their All-Flash Arrays instead of outgrowing them.
As a result, there will have to be a different motivation to upgrade. For example, Chris points out faster network capabilities as a rationale for upgrading. But that is an infrastructure decision, not necessarily a storage system decision. It also reinforces my assertion that All-Flash Array vendors should design their systems so that components like network interfaces and the storage software can be easily upgraded instead of forcing a replacement of the entire system.
On the storage software front, this is already the case. Several vendors have told me that their newest storage software can run on their oldest storage hardware. And we just wrote a briefing note on a storage vendor (Dot Hill) whose new system allows for storage interface interchangeability now and into the future.
In my opinion, power efficiency and data center footprint improvements may be the most legitimate reasons for an upgrade going forward. As I stated above, software and interfaces should be easily upgradeable, but reducing power consumption and data center footprint would probably require a complete chassis redesign. These two efficiency gains will be especially important in power- and space-constrained metropolitan data centers, but the ROI of switching has to make sense. If the savings from a new system do not cover the cost of the switch and, again, you don't need the added performance and capacity, why upgrade? It is a bit like going into debt for a hybrid or electric car while getting rid of your paid-for gas guzzler: you'd have to run the numbers to see how long it would take for the fuel savings to cover the cost of the new car.
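The payback analysis above can be sketched in a few lines of Python. The dollar figures here are purely hypothetical assumptions chosen for illustration, not vendor data:

```python
# Illustrative payback-period sketch for the "ROI of switching" argument.
# All dollar figures below are hypothetical assumptions, not real quotes.

def payback_years(switch_cost: float, annual_savings: float) -> float:
    """Years of power/space savings needed to recoup the cost of switching."""
    if annual_savings <= 0:
        return float("inf")  # the savings never cover the switch
    return switch_cost / annual_savings

# Assume (hypothetically) a $150,000 net cost to replace a working array,
# and that the new chassis saves $20,000 per year in power, cooling, and
# floor space.
years = payback_years(150_000, 20_000)
print(f"Payback period: {years:.1f} years")  # Payback period: 7.5 years
```

Under those assumed numbers the payback period is 7.5 years, longer than a traditional three-year refresh cycle, which is exactly why the switch may not make sense unless you also need the new system's performance or capacity.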
To me there’s no question that the pace at which a storage system will need to be upgraded will be much slower than in earlier years. Maybe it is not 10 years, but seven is not out of the question and five is a near certainty. The three-year upgrade cycle could definitely become a relic of the past.
This is good news for some vendors and bad news for others. New vendors should benefit, since they can entice customers by calculating an ROI based on a longer life expectancy and thereby cost-justify a more expensive All-Flash Array. For legacy vendors who count on a three-year replacement cycle, it will take a change in business model, as well as a focus on other reasons to upgrade.