John Scaramuzzo, Senior VP and General Manager at SanDisk, delivered the second keynote at Storage Visions 2014, in which he put forth the notion that an All-Flash data center may happen sooner than we think. There is no doubt in my mind that flash is now the storage medium of choice for almost any form of active data, and data centers are moving in that direction rapidly. What surprised me was John’s assertion that capacity storage for near-active data will move to flash in the next three years. This is a compelling idea that’s worth exploring.
Performance is Flash
Today, there is little argument that flash is THE storage medium for performance-sensitive data. In my opinion, the continued decrease in flash storage costs, combined with improvements in flash durability and management, has made flash-based storage the right choice for any type of active data. In fact, my general recommendation is that if you are adding drives to a legacy storage system to solve a performance problem, you should stop and look at flash options first.
SanDisk is doing its part to push the performance envelope even further with its Memory Channel Architectures. We detailed these in a chalk-talk video that John and I did recently:
The part of John’s presentation that got me thinking was the value of flash storage in a more capacity-centric, tier-2 storage environment, basically near-active but not ‘cold’ data. This traditionally is the domain of mid-capacity hard drives, where storage must perform well but still provide an excellent cost per GB. Today, and for the foreseeable future, hard drives still win if all you look at is the acquisition price per TB.
The Total Cost of Capacity
However, what if you considered the cost of capacity from a total cost of ownership (TCO) perspective? What does it cost to provide floor space, power and cooling to a server or group of servers (more common in cloud environments) filled with 2TB hard disk drives? When you look at the TCO of that capacity, drive density becomes a factor. The problem for hard drive technology is that as density increases, throughput per TB generally declines: drive interface speeds stay roughly flat while capacities grow, so the time to read or rebuild a full drive keeps getting longer.
Flash solid state drives (SSDs) might actually be the better and more cost-effective answer. Today, companies like SanDisk make SSDs in 1.8” and 2.5” drive form factors, and are promising capacities greater than 2TB in the near future. In fact, John more than hinted at the availability of 16TB 2.5” drives in the next few years. However, unlike with HDD technology, the per-drive performance of these much higher capacity SSDs will match that of current offerings, which is already far better than what HDDs deliver. As a result, you could end up with a storage server that handles 4x to 8x the capacity available from physically larger HDDs, at many times the performance of a comparable HDD system. This would mean fewer servers, more densely packed with SSDs, requiring less cooling per drive. And it could actually be cheaper than a hard disk system of comparable capacity.
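To see how drive density shifts the math, here is a minimal TCO-per-TB sketch. Every figure in it (drive prices, drive counts, wattages, power and rack costs) is a hypothetical assumption chosen for illustration, not SanDisk pricing or measured data-center costs; plug in your own numbers.

```python
# Illustrative TCO-per-TB comparison for HDD- vs. SSD-based capacity servers.
# ALL figures below are hypothetical assumptions for the sake of the example.

def tco_per_tb(drive_tb, drives_per_server, drive_cost, server_cost,
               watts_per_server, years, power_cost_per_kwh,
               rack_cost_per_server_yr):
    """Total cost of ownership per TB over one server's service life."""
    capacity_tb = drive_tb * drives_per_server
    acquisition = server_cost + drives_per_server * drive_cost
    energy_kwh = watts_per_server * 24 * 365 * years / 1000  # lifetime kWh
    opex = energy_kwh * power_cost_per_kwh + rack_cost_per_server_yr * years
    return (acquisition + opex) / capacity_tb

# Hypothetical HDD server: 12 x 2TB drives, higher power draw.
hdd = tco_per_tb(drive_tb=2, drives_per_server=12, drive_cost=100,
                 server_cost=3000, watts_per_server=450, years=5,
                 power_cost_per_kwh=0.10, rack_cost_per_server_yr=600)

# Hypothetical SSD server: 24 x 4TB 2.5" SSDs, denser and cooler.
ssd = tco_per_tb(drive_tb=4, drives_per_server=24, drive_cost=900,
                 server_cost=3000, watts_per_server=250, years=5,
                 power_cost_per_kwh=0.10, rack_cost_per_server_yr=600)

print(f"HDD TCO/TB: ${hdd:,.2f}")  # $382.13 with these assumed inputs
print(f"SSD TCO/TB: ${ssd:,.2f}")  # $298.91 with these assumed inputs
```

With these particular assumptions the SSD server comes out ahead per TB despite a far higher acquisition price, because each chassis holds 4x the capacity and draws less power. Change the assumed drive prices and the conclusion can flip, which is exactly why the math has to be run for a specific environment.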
Storage Swiss Take
The problem with basing decisions on ROI or TCO tables is that the math actually has to work in your specific environment. But for the right type of data center, large enterprises and cloud providers in particular, the math could easily work out. Creating an SSD-only data center for tier 0 through tier 2 may actually be more cost effective and it certainly would be more responsive to user requests.