Overcoming All-Flash Array Post-Implementation Problems

Storage Switzerland suggested a few years ago that IT professionals look for ways to get as much as 10 years out of their flash arrays. The problem was, as we cited at the time, that most storage hardware manufacturers counted on a three-to-four-year upgrade cycle. The storage refresh cycle is now driven more by vendor programs than by an actual need to upgrade the technology. In addition, the way storage capacity is calculated has changed, and some vendors are taking deduplication claims to an extreme, leading to customer confusion. An all-flash array is supposed to make life easy for IT, and while the technology does exactly that, vendor bureaucracy is doing its best to bring confusion back into the market.

Making 10-Year Flash a Reality

When we made the 10-year flash claim, our reasoning was that for most (not all) data centers, the purchase of the right all-flash array would not only address performance problems but eliminate them – for a long time. Also, most all-flash arrays come with deduplication, compression, or both, and since most (again, not all) data centers see about a 5:1 efficiency ratio, the expansion capability of a scale-up all-flash array would more than likely meet their storage needs for years to come.
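
To put rough numbers behind that reasoning, here is a minimal sketch. The raw capacity, starting data set, and growth rate are hypothetical assumptions for illustration; only the 5:1 ratio comes from the article.

```python
# A minimal sketch of the longevity reasoning, assuming a 5:1 data
# efficiency ratio and steady annual data growth. All figures are
# hypothetical, not drawn from any specific vendor or array.

RAW_TB = 100            # raw flash capacity purchased (hypothetical)
EFFICIENCY = 5.0        # assumed deduplication + compression ratio (5:1)
CURRENT_DATA_TB = 120   # logical data stored today (hypothetical)
GROWTH_RATE = 0.25      # assumed 25% annual data growth

effective_tb = RAW_TB * EFFICIENCY  # 500TB of effective capacity

# Count full years of growth that still fit within effective capacity.
years = 0
data = CURRENT_DATA_TB
while data * (1 + GROWTH_RATE) <= effective_tb:
    data *= 1 + GROWTH_RATE
    years += 1

print(f"Effective capacity: {effective_tb:.0f}TB")
print(f"Capacity covers roughly {years} years at {GROWTH_RATE:.0%} growth")
```

Under these assumptions the initial purchase alone covers about six years; add the scale-up expansion capability described above and a 10-year horizon becomes plausible.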

New all-flash vendors moved to address this reality. Several created an approach that allows customers to carry their investment forward by cost-effectively upgrading storage controllers when they need to. Most importantly, there is no artificial bump in ongoing maintenance after year three – a common practice among legacy storage vendors. Interestingly, these legacy vendors are now having to change their ways and provide more reasonable maintenance and upgrade paths.

10-Year Flash Creates a Future Problem

A flash system that lasts 10 years creates a second challenge. Flash storage continues to become more dense, meaning that more and more capacity is available in the same amount of space. Less than two years ago, a flash array delivered about 24TB of storage in a drive shelf; today it delivers 48TB in that same space. When all-flash customers need additional storage, they don't necessarily need to add another drive shelf; they need a way to consolidate data onto the newer, higher-capacity drives. Doing so increases capacity without increasing power consumption or consuming additional data center floor space. In effect, the TB per watt and the TB per data center floor tile both double.
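
The math behind that claim is straightforward. In the sketch below, the 24TB and 48TB shelf capacities come from the figures above, while the shelf's power draw and floor-tile footprint are hypothetical placeholders:

```python
# Back-of-the-envelope math for the density consolidation claim.
# The 24TB -> 48TB shelf capacities come from the article; the shelf's
# power draw and floor-tile footprint are hypothetical placeholders.

SHELF_WATTS = 400        # assumed power draw of one drive shelf
SHELF_TILES = 0.5        # assumed floor tiles occupied by one shelf

old_tb, new_tb = 24, 48  # shelf capacity then vs. now

print(f"TB per watt: {old_tb / SHELF_WATTS:.3f} -> {new_tb / SHELF_WATTS:.3f}")
print(f"TB per tile: {old_tb / SHELF_TILES:.0f} -> {new_tb / SHELF_TILES:.0f}")
# Same shelf, same watts, same tile: both ratios double with the drives.
```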

The problem is that when consolidating drives in this manner, the customer is stuck with the old drives. Vendors need to create a program that allows customers to trade in their older, less dense flash drives and incrementally replace them with high-density drives. They also need to provide technology that automatically moves data to the new drives.

The Deduplicated Lies that Vendors Tell

Another challenge occurs long before the all-flash array reaches its 10-year life expectancy: correctly calculating how much storage the organization should buy. The cause of the capacity confusion is the very data efficiency features (deduplication, compression, and thin provisioning) that all-flash arrays use to claim price parity with hard disk-based arrays. These data efficiency techniques cause issues because they are very much a "mileage will vary" calculation. We use 5:1 as a rule of thumb, but every data center is unique, and we've seen ratios range from 2:1 to 9:1, depending on the environment.
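
To see how wide that swing really is, compare the raw capacity needed to hold the same logical data set at the ratios cited above. The 200TB data set size is a hypothetical example:

```python
# Raw flash needed to hold the same logical data set at the efficiency
# ratios cited above. The 200TB logical data set is hypothetical.

LOGICAL_TB = 200  # data the organization actually needs to store

for ratio in (2, 5, 9):
    print(f"{ratio}:1 efficiency -> {LOGICAL_TB / ratio:.0f}TB raw required")

# 2:1 -> 100TB, 5:1 -> 40TB, 9:1 -> 22TB: a 4.5x swing in the raw
# capacity the customer must buy for the exact same data.
```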

Capacity confusion is leading many storage vendors to make outrageous data efficiency claims, but the customer still has to guess just how much capacity they need. Some vendors are providing data reduction guarantees, but those are just shots in the dark. In most cases, customers are massively over-buying capacity, which would be fine except that flash capacity will almost certainly be less expensive next year.

Vendors need to start making deduplication assessments based on each individual customer's actual data set. In other words, examine each customer's data and provide a specific guarantee based on that data.
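
The core of such an assessment can be sketched simply: chunk a sample of the customer's data into fixed-size blocks, hash each block, and compare total blocks to unique blocks. This is only a minimal illustration; production assessment tools use variable-size chunking, compression modeling, and statistical sampling.

```python
import hashlib

def estimate_dedupe_ratio(path: str, block_size: int = 4096) -> float:
    """Estimate a deduplication ratio by counting unique fixed-size blocks.

    A minimal illustration only: real assessment tools use variable-size
    chunking, compression modeling, and statistical sampling.
    """
    total_blocks = 0
    unique_hashes = set()
    with open(path, "rb") as f:
        while block := f.read(block_size):
            total_blocks += 1
            unique_hashes.add(hashlib.sha256(block).digest())
    return total_blocks / len(unique_hashes) if unique_hashes else 1.0

# Hypothetical usage against a sample of the customer's data:
# print(f"Estimated ratio: {estimate_dedupe_ratio('/data/sample.img'):.1f}:1")
```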

StorageSwiss Take

IT professionals tell me all the time that after implementing an all-flash array, their day-to-day administrative lives become easier. But as the array ages, which happens quickly in the fast-paced flash market, confusion arises over how to upgrade and maintain the system. All-flash vendors need to take that burden off of IT professionals, allowing organizations to invest in all-flash with confidence that the solution will solve not only their immediate problems but future challenges as well.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
