All-flash arrays, once thought of as the storage system for certain high-performance use cases, are now the mainstream primary storage system. The best proof point of all-flash dominance comes from vendors that sell hard disk, hybrid (flash and hard disk) and all-flash systems: most of those vendors now report that the all-flash option is the top seller in their product mix. Chances are high that if an organization’s primary storage tier is not all-flash already, it will be soon. After the data center is all-flash, what’s next?
The challenge for most all-flash vendors is what they will do for an encore. Most of their mid-range products provide more performance than the typical data center needs, let alone their high-end products. Certainly some workloads will continue to require more and more performance, but the industry is reaching the point where good enough is good enough for most environments.
Compare the primary storage situation to purchasing a personal laptop or desktop. For the first decade or so of their existence, the speed of the system was a top concern for the buyer, as were the speed and capacity of the hard disk drive. Now, though, an abundance of processing power has taken care of those concerns; the average Intel CPU more than meets the needs of the typical user. The same can be said of storage. Once SSDs became commonplace in end-user systems, the performance debate was over, and for the most part so was the capacity debate, as users augmented what they had with the cloud.
Today users buy their devices for other reasons: quality of the screen, battery life, and the weight and thinness of the system itself. Even in some of these areas, most users have reached a point where good enough is good enough. For most users, 10 hours of battery life, two to three pounds of weight and an HD display do the trick.
Primary storage, once it goes all-flash, follows much the same cycle. Performance is generally good enough for most use cases. Certainly there are exceptions, just as there are with user devices, but for the average workload we have reached the good-enough point.
What Should Flash Vendors Do Next?
Over the next year or so most flash vendors will focus on NVMe and NVMe over Fabrics. NVMe is a protocol designed specifically for memory-based storage, and it improves the performance of flash systems. The networked version, NVMe over Fabrics, is available over both Ethernet and Fibre Channel. NVMe will put the final nail in the performance coffin, as most systems will deliver more than 500,000 IOPS. For the average data center, these systems will provide all the performance it will need for years.
As NVMe-based storage comes to market, the original hybrid array vendors may have an advantage. They can design systems that are part NVMe and part SAS. Just as these vendors used to keep active data on flash and dormant data on hard disk, they can now keep active data on NVMe flash and dormant data on SAS SSDs. Given the price premium on NVMe architectures, this should provide hybrid array vendors a price advantage for a while.
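The hybrid placement idea above can be sketched as a simple promote-on-access, demote-on-age policy. This is an illustrative sketch only; the tier names, the one-hour threshold and the TieringPolicy class are assumptions, not any vendor's actual implementation.

```python
import time

# Illustrative two-tier placement policy: volumes touched recently sit
# on the NVMe tier; anything cold is demoted to the SAS SSD tier.
# The threshold and tier names are invented for this sketch.
HOT_THRESHOLD_SECONDS = 3600  # demote data untouched for an hour

class TieringPolicy:
    def __init__(self):
        self.last_access = {}  # volume id -> timestamp of last I/O
        self.placement = {}    # volume id -> "nvme" or "sas"

    def record_access(self, volume_id, now=None):
        now = time.time() if now is None else now
        self.last_access[volume_id] = now
        self.placement[volume_id] = "nvme"  # promote on access

    def rebalance(self, now=None):
        """Demote any volume whose data has gone cold."""
        now = time.time() if now is None else now
        for volume_id, last in self.last_access.items():
            if now - last > HOT_THRESHOLD_SECONDS:
                self.placement[volume_id] = "sas"

policy = TieringPolicy()
policy.record_access("vol1", now=0)
policy.record_access("vol2", now=3000)
policy.rebalance(now=4000)   # vol1 has gone cold; vol2 is still hot
print(policy.placement)      # {'vol1': 'sas', 'vol2': 'nvme'}
```

A real array would track access at block or extent granularity and migrate data in the background, but the shape of the policy is the same one hybrid vendors already use for flash versus hard disk.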
NVMe over Fabrics will lead vendors to develop abstracted storage, what some vendors will call “composable storage.” These designs will be scale-out in nature, but not the typical linear scale-out we’ve become accustomed to. Instead, these systems will be more of a mesh, where storage compute (controllers) and storage shelves (capacity) are attached to the network and then allocated, in some cases dynamically, to specific workloads. The result is an environment that scales in whichever direction the data center needs, performance or capacity, and a design that can guarantee specific performance to the applications that need it.
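A rough way to picture composable allocation is two independent free pools, one of controllers and one of shelves, drawn down per workload. The class, method and pool names below are hypothetical; this sketches the idea, not any vendor's API.

```python
class StorageMesh:
    """Toy model of a composable mesh: compute (controllers) and
    capacity (shelves) are pooled and assigned to workloads
    independently, so either dimension can scale on its own."""

    def __init__(self, controllers, shelves):
        self.free_controllers = list(controllers)
        self.free_shelves = list(shelves)
        self.workloads = {}  # workload name -> allocation

    def compose(self, name, n_controllers, n_shelves):
        self.workloads[name] = {
            "controllers": [self.free_controllers.pop() for _ in range(n_controllers)],
            "shelves": [self.free_shelves.pop() for _ in range(n_shelves)],
        }

    def add_capacity(self, name, n_shelves):
        # Grow capacity only; the compute allocation is untouched.
        self.workloads[name]["shelves"] += [
            self.free_shelves.pop() for _ in range(n_shelves)
        ]

mesh = StorageMesh(controllers=["c1", "c2"], shelves=["s1", "s2", "s3", "s4"])
mesh.compose("oltp-db", n_controllers=1, n_shelves=1)
mesh.add_capacity("oltp-db", 2)  # capacity triples, compute stays the same
print(len(mesh.workloads["oltp-db"]["shelves"]))  # 3
```

The point of the model is the separation: in a traditional linear scale-out cluster, adding a node adds compute and capacity in lockstep, while here each is allocated on its own.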
Scale-out storage introduced the concept of managing multiple storage nodes as a single entity via a cluster. Abstracted storage takes another step in that direction, enabling divergent paths for capacity and compute. The next step is for the storage mesh to be manageable across locations, so a single person can manage all storage in all locations as a single entity.
Part of the distribution of storage also includes the cloud, which means storage vendors will need to decouple their software from their hardware and run that software on public cloud compute. The ability to run traditional storage software in the cloud enables organizations to use the cloud as a replication target. If storage vendors can then expose those replicated systems to cloud compute, the DRaaS market changes from a niche with a few vendors to a feature that every storage system will include.
The ability to replicate to the cloud and leverage cloud compute also means organizations could more easily use the cloud for bursting. Armed with that capability, IT planners could design data centers for typical workload requirements instead of over-buying for worst-case scenarios. The net result could be a dramatic reduction in IT spend.
Copy Data Everywhere
There are several standalone vendors that provide copy data services, which are essentially the advanced management of snapshots. Most storage vendors have snapshot capabilities within their systems, and most can also present those snapshots as writable volumes to another application.
Copy data management typically adds indexing, programmability and automation. The copy data vendor creates an index of what each snapshot contains to make finding data easier, and provides the ability to automate the attachment of snapshots to the applications that need them.
For example, a development team may want a refreshed copy of production data every 30 minutes. A copy data management solution will make sure a snapshot is taken every 30 minutes and will replace the old data set with the updated copy as needed.
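That refresh cycle can be sketched as a small loop over a snapshot API. Everything here is hypothetical: FakeArray stands in for a vendor's real snapshot interface, and a scheduler (cron, a timer thread) would call refresh() every 30 minutes.

```python
import itertools

class FakeArray:
    """In-memory stand-in for a storage array's snapshot API."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.mounted = {}  # consumer -> snapshot currently presented

    def take_snapshot(self, volume):
        return f"{volume}-snap{next(self._ids)}"

    def mount(self, snapshot, consumer):
        self.mounted[consumer] = snapshot

    def unmount(self, snapshot, consumer):
        if self.mounted.get(consumer) == snapshot:
            del self.mounted[consumer]

    def delete_snapshot(self, snapshot):
        pass  # a real array would reclaim the snapshot's space here

class CopyDataManager:
    def __init__(self, array, source_volume):
        self.array = array
        self.source = source_volume
        self.current = {}  # consumer -> snapshot it is using

    def refresh(self, consumer):
        """Replace the consumer's data set with a fresh snapshot."""
        old = self.current.get(consumer)
        if old is not None:
            self.array.unmount(old, consumer)
            self.array.delete_snapshot(old)
        snapshot = self.array.take_snapshot(self.source)
        self.array.mount(snapshot, consumer)
        self.current[consumer] = snapshot

array = FakeArray()
manager = CopyDataManager(array, "prod-db")
manager.refresh("dev-team")       # first copy presented
manager.refresh("dev-team")       # stale copy retired, new one mounted
print(array.mounted["dev-team"])  # prod-db-snap2
```

The interesting work in a real product is in the index and in application-consistent snapshots; the mount-and-retire loop itself is this simple.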
There is no reason storage vendors can’t build a copy data management solution of their own right into the storage system. Indexing software is already available, and many systems have programmability built in. Extending that programmability to automatically mount certain snapshots to specific applications on a regular basis is not out of reach.
Tier To Another Storage System
A final feature every storage system vendor should consider is the ability to tier to another storage system. This is different from a mesh; the design should tier to a second storage system that is independent of the first. Ideally, if the originating system is an all-flash array, the alternate tier should be a secondary storage system or even an object storage system. The all-flash array should be able to copy its data to the secondary system via snapshot technology, as well as tier off older data that no longer requires flash performance.
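The tiering half of that flow reduces to an age-based sweep. The 90-day threshold, the dict-based tiers and the function name below are assumptions made for illustration; a real system would track access metadata per file or object and move the data itself, not an in-memory entry.

```python
import time

COLD_AFTER_SECONDS = 90 * 24 * 3600  # demote data untouched for ~90 days

def tier_cold_data(flash_tier, object_tier, now=None):
    """Move cold entries off the all-flash tier to the object tier.

    Each tier is modeled as {name: (data, last_access_timestamp)}.
    """
    now = time.time() if now is None else now
    for name in list(flash_tier):
        _, last_access = flash_tier[name]
        if now - last_access > COLD_AFTER_SECONDS:
            object_tier[name] = flash_tier.pop(name)

flash = {
    "q3-report": (b"...", 0),                # untouched for 100 days: cold
    "orders-db": (b"...", 100 * 24 * 3600),  # touched just now: stays
}
objects = {}
tier_cold_data(flash, objects, now=100 * 24 * 3600)
print(sorted(flash), sorted(objects))  # ['orders-db'] ['q3-report']
```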
Storage performance is quickly becoming a commodity. As that occurs, storage system vendors need to look beyond performance and deliver key functionality that fits into the architecture of the modern data center and meets the needs of the modern organization.
The goal should be to lower administration time and reduce cost. While improving overall storage I/O performance should not be ignored, system vendors would be well advised to slow the pace of performance improvements and concentrate on the features the data center needs to meet the demands of the organization beyond just performance.