Overcoming the All-Flash Array Implementation Challenges

Eliminating “waits” allows customers, users, and applications to interact with the IT infrastructure more fluidly. As a result, IT professionals are focused on improving response times, and storage is getting much of the attention. The demand for performance is accelerating, and data centers may be forced to upgrade their storage hardware sooner than their typical storage refresh schedule allows. Despite the unbudgeted cost, the ability of new all-flash arrays (AFAs) and hybrid arrays to meet and exceed expectations is too tempting for IT planners to pass up. The problem is that there is more to an AFA implementation than the hard cost, and those other costs need to be considered before forklifting in a new storage infrastructure.

Problem # 1 – Purchasing an All-Flash Array

The first and most obvious problem with purchasing an AFA is paying for it. Most AFAs leverage MLC NAND, deduplication, and compression to lower the cost per GB of flash storage. These techniques have proven effective, resulting in a price of less than $5 per GB. While these price points certainly make all-flash arrays interesting, they are still not less expensive than hard drive-based or hybrid alternatives, especially if those alternatives are already in use. Therein lies the challenge for an AFA: ultimately it is a net-new purchase that either has to come as part of a storage refresh cycle or requires budget dollars to be found to cover a purchase ahead of the normal refresh cycle.

Problem # 2 – Migrating Data to the All-Flash Array

Another cost to consider is the cost of migrating data to the AFA. While most AFA vendors have excellent track records of implementing their arrays in less than a day, it does take time to migrate applications to that array. The applications that can justify having their data placed on an AFA are often critical to the business, and anything more than a few moments of downtime is typically unacceptable.

Migrating data to any new storage system requires planning and, typically, a replication tool that can copy the data while the application is running and then cut the application over once all the data has been copied. IT typically does not budget for these tools when purchasing the array.

Problem # 3 – Learning How to Use an All-Flash Array

While an AFA is no more or less difficult to learn than any other new array, it does require time to learn the new software. Again, most AFAs are from new vendors and are operated differently from the existing storage hardware that the organization already owns. Additionally, each of the features the organization plans to use needs to be tested to make sure it is reliable. Any automation work, such as scripting, will also need to be redone, which again takes time.

Problem # 4 – Integrating the All-Flash Array into the Data Protection Process

Integrating an AFA into the environment also means integrating it into the data protection process. Again, the AFA is likely to host mission-critical, performance-demanding applications, and those applications require special protection. The AFA will also likely enable those applications to scale further, supporting more users and transactions, making protection all the more critical.

An AFA is also likely to lower the number of failure domains. In an attempt to better rationalize the expense, and thanks to the system's performance capabilities, the IT professional will stack a high number of workloads on it, increasing the scope of impact if that system fails. Again, AFAs are new products from new vendors with new software, and they have not been through the vetting process that legacy solutions have.

Finally, the performance of the data protection process, specifically recovery, needs to improve. As production storage becomes faster, the data protection process has to provide similar performance to applications in their recovered state.

Problem # 5 – Not All Data Needs Flash

Multiple studies show that the overwhelming majority of data in the data center (80%+) is not active. In fact, most of an organization’s data has not been accessed in years and will never be accessed again after it moves to an inactive state. As data ages it tends to become unique: there is less need for additional copies, and the efficiency of deduplication decreases dramatically. Finally, there are unanswered questions about the long-term viability of flash storage. Even if flash, thanks to higher bit-per-cell technologies, reaches parity with hard disk systems, there will still be a need to manage multiple tiers of flash.

As long as flash storage remains more expensive than other forms of storage, it makes little sense to store this cold data on flash. Most data centers have enough capacity, or can expand their current hard disk systems, to meet capacity demands. Instead of buying an all-flash array and migrating all or even most of their data to it, organizations should invest in a small flash tier and look for a way to automate the movement of active data to it, as sketched below.
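
To illustrate what automating that movement might look like, below is a minimal, file-level sketch of an age-based tiering policy written in Python. The directory paths, the 90-day threshold, and the function names are hypothetical assumptions for illustration only, not features of any particular array or accelerator.

    # Minimal sketch of age-based tiering (illustrative assumptions only).
    import os
    import shutil
    import time

    FLASH_TIER = "/mnt/flash"   # hypothetical small flash tier
    DISK_TIER = "/mnt/disk"     # existing hard disk capacity
    HOT_THRESHOLD_DAYS = 90     # data untouched for 90 days is considered cold

    def days_since_access(path):
        """Days since the file was last read."""
        return (time.time() - os.stat(path).st_atime) / 86400

    def demote_cold_files():
        """Move files that have gone cold off the flash tier to disk."""
        for name in os.listdir(FLASH_TIER):
            src = os.path.join(FLASH_TIER, name)
            if os.path.isfile(src) and days_since_access(src) > HOT_THRESHOLD_DAYS:
                shutil.move(src, os.path.join(DISK_TIER, name))

    if __name__ == "__main__":
        demote_cold_files()

In practice a scheduler would run a job like this periodically and would also promote newly active data back to the flash tier.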

Solving the Hybrid Array Problems

Assuming the data center’s current hard disk storage systems are not at end of life, AFAs should, for now, be considered point solutions used to address specific performance problems. In other words, consider them part of a hybrid storage strategy. There are two methods for integrating flash into a hard disk-based storage architecture. First, the organization can purchase a new storage system that seamlessly integrates flash and disk. The problem is that these new arrays carry many of the same migration and learning-curve problems as purchasing a new AFA. Second, flash can often be added to the existing storage systems, but then the storage administrator has to figure out a way to move hot data to that new flash tier. Additionally, this flash tier is isolated to that storage system, meaning that each storage system in the data center will need its own flash tier.

One alternative is a network-based storage accelerator. These accelerators are appliances with flash storage installed in them that can automatically cache reads from, and writes to, the legacy storage system. The device is installed inline and requires only a simple configuration change to become active. Integration of the accelerator can be gradual: implement read-only caching first, then turn on write caching as confidence in the system grows.
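
To make that gradual rollout concrete, here is a toy Python model of an inline cache in which read caching is always on and write caching is a switch that can be flipped later. The class, its method names, and the LRU eviction policy are assumptions made for illustration; this is not any vendor's actual implementation.

    # Toy model of an inline accelerator cache (illustrative only).
    from collections import OrderedDict

    class InlineCache:
        def __init__(self, backend, capacity, write_cache=False):
            self.backend = backend          # stands in for the legacy array
            self.capacity = capacity        # number of blocks the flash tier holds
            self.write_cache = write_cache  # start False: read-only caching
            self.cache = OrderedDict()      # LRU order, oldest entry first

        def read(self, key):
            if key in self.cache:           # cache hit: serve from flash
                self.cache.move_to_end(key)
                return self.cache[key]
            value = self.backend[key]       # cache miss: fetch from the array
            self._admit(key, value)
            return value

        def write(self, key, value):
            if self.write_cache:
                self._admit(key, value)     # keep a copy in flash for re-reads
            self.backend[key] = value       # destaged immediately here for simplicity;
                                            # a real accelerator holds writes only briefly

        def _admit(self, key, value):
            self.cache[key] = value
            self.cache.move_to_end(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used

    # Start with read-only caching, then enable write caching once trusted.
    legacy_array = {"blk0": b"a", "blk1": b"b"}
    accel = InlineCache(legacy_array, capacity=2)
    accel.read("blk0")
    accel.write_cache = True
    accel.write("blk2", b"c")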

Solving the Four All-Flash Array Problems

A network-based storage accelerator solves the problems of integrating a new AFA. First, the data center enjoys a performance boost similar to purchasing an AFA while continuing to use its existing storage assets. The only concern versus an AFA is the possibility of a cache miss. Selecting only the volumes with data worth caching, and sizing the flash tier in the accelerator at five to ten percent of that data, will reduce the likelihood of a cache miss. Also, some accelerators integrate write intelligence so that large sequential writes are sent directly to disk instead of consuming cache capacity. Finally, a storage accelerator brings these performance gains to all the storage systems in the environment.
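
As a rough, back-of-the-envelope illustration of the sizing guidance and the write-intelligence idea, the Python helpers below size a flash tier at a chosen fraction of the cache-worthy data and decide whether a given write should bypass the cache. The function names, the ten percent default, and the 256 KB sequential-write threshold are assumptions for illustration, not any product's actual logic.

    # Illustrative sizing and write-admission helpers (assumed values).

    def accelerator_flash_size_tb(cache_worthy_data_tb, fraction=0.10):
        """Size the flash tier at five to ten percent of the cached volumes."""
        return cache_worthy_data_tb * fraction

    def should_cache_write(size_kb, is_sequential, large_io_kb=256):
        """Send large sequential writes straight to disk; cache everything else."""
        if is_sequential and size_kb >= large_io_kb:
            return False    # disk handles large sequential I/O well; save flash
        return True

    # Example: 50 TB of cache-worthy volumes suggests roughly a 5 TB flash tier,
    # and a 1 MB sequential write would bypass the cache.
    print(accelerator_flash_size_tb(50))    # 5.0
    print(should_cache_write(1024, True))   # False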

Second, a storage accelerator eliminates the need for data migration; data stays where it is. The accelerator stores only cached copies of data, while the authoritative copy remains on the existing storage. Even when the appliance is caching writes, data that exists only on the appliance is held for just a few minutes before it is destaged. While the administrator does need to learn how to assign an existing volume to the accelerator, no other storage software needs to be learned. Additionally, since all the data stays on the existing storage, there is no need to change any of the data protection processes.

Conclusion

A new all-flash or hybrid array allows an organization to deliver a more fluid interaction with technology than hard disk systems can, but if the data center is not ready for a storage refresh, the out-of-budget expense may be too much for the organization to justify. Beyond hard costs, there are soft costs to consider, such as migration, learning, and integration expenses. All of these add to the total cost of the AFA.

A storage acceleration appliance that transparently caches active data to an internal flash storage tier is a viable alternative, allowing organizations to meet the new performance demands of the data center without breaking the bank or disposing of existing storage investments.

Sponsored By Cloudistics

Cloudistics is driven to democratize IT through next-generation virtualization infrastructure products. This means building high-performance software that enables businesses of all sizes to simplify and optimize their infrastructure. Cloudistics products are built using a design-first, application-centric approach. Their products enable businesses to deploy, manage, and optimize their applications with minimal effort and reduced costs so they can focus on their IP instead of IT. Cloudistics’ Turbine accelerates applications by instantly transforming an existing SAN array to flash performance for 10% of the cost of an all-flash upgrade.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
