For most organizations, hybrid arrays make more sense than all-flash arrays. Hybrid arrays use resources more efficiently and are, therefore, more cost-effective. Intelligent storage software, like StorONE’s S1 Enterprise Storage Platform, can make the efficiency advantage of hybrid even greater while minimizing its disadvantages.
The Concept Behind Hybrid Instead of All-Flash
Hybrid arrays generally take two tiers of storage and transparently move data between them. The primary objective is to save money and use resources efficiently. Typically, the first tier is high-performance flash storage, and the second tier is hard disk storage. The ROI of a hybrid system counts on the reality that, for most organizations, roughly 80% of their data is inactive. The ROI also counts on there being a significant price disparity between flash and hard disk.
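The ROI math can be sketched with a quick back-of-envelope calculation. The prices below are illustrative placeholders, not vendor quotes or market figures; they exist only to show how the 80% inactive figure drives the savings:

```python
# Hypothetical hybrid vs. all-flash cost comparison.
# FLASH_PER_TB and DISK_PER_TB are assumed, illustrative prices.
FLASH_PER_TB = 300.0       # assumed $/TB for flash
DISK_PER_TB = 30.0         # assumed $/TB for hard disk
CAPACITY_TB = 100
INACTIVE_FRACTION = 0.80   # ~80% of data is inactive

all_flash_cost = CAPACITY_TB * FLASH_PER_TB
hybrid_cost = (CAPACITY_TB * (1 - INACTIVE_FRACTION) * FLASH_PER_TB
               + CAPACITY_TB * INACTIVE_FRACTION * DISK_PER_TB)

print(f"All-flash: ${all_flash_cost:,.0f}")  # $30,000
print(f"Hybrid:    ${hybrid_cost:,.0f}")     # $8,400
```

With a 10x price gap and 80% inactive data, the hybrid configuration costs less than a third of the all-flash one; narrow either gap and the savings shrink, which is exactly the argument AFA vendors make.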
Why Hybrid Instead of All-Flash Failed
Despite the logic behind hybrid, all-flash arrays (AFAs) are the dominant form of primary storage in most data centers. AFA vendors continue to successfully convince potential customers that using a hybrid system carries too much risk and that the cost savings aren’t all that great.
There is a performance risk when using a hybrid system. When 100% of IO accesses are served from the flash tier, performance is effectively the same as an all-flash array. The concern arises when an application or user requests older data that, because of its age, is no longer on the flash tier. When handling a request for inactive data, most hybrid systems must recall that data from the hard disk tier and move it to the flash tier. The recall process adds latency because it retrieves the data from slower media. It adds further latency because it copies that data to the flash tier. The copy process is not only an extra step; it also exposes flash media’s biggest weakness: write performance.
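The effect of recall misses on average latency can be estimated with a simple weighted model. The latency figures here are assumed round numbers for illustration, not measurements of any product:

```python
# Back-of-envelope effective read latency for a hybrid array.
# All latency figures are assumed, illustrative values.
FLASH_READ_MS = 0.1     # tier-1 flash read
DISK_READ_MS = 5.0      # hard disk read on a miss
PROMOTE_COPY_MS = 0.5   # extra cost of copying recalled data to flash

def effective_latency_ms(flash_hit_rate):
    """Average read latency given the fraction of reads served from flash."""
    miss_rate = 1.0 - flash_hit_rate
    return (flash_hit_rate * FLASH_READ_MS
            + miss_rate * (DISK_READ_MS + PROMOTE_COPY_MS))

print(effective_latency_ms(1.00))  # 0.1 ms: indistinguishable from an AFA
print(effective_latency_ms(0.95))  # ~0.37 ms: even 5% misses dominate
```

Even a small miss rate dominates the average, which is why both the slow recall and the extra promotion copy matter so much.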
Why All-Flash instead of Hybrid
All-flash array vendors claim that because of the continuing decline in flash pricing, and because of deduplication, there is no longer a financial reason to choose hybrid instead of all-flash. They claim that the unpredictable performance of hybrid arrays outweighs any remaining cost advantage. AFA vendors, though, ignore the fact that hard disk drives also continue to decline in cost per terabyte. They also ignore the new reality that hard disk isn’t the only option for the second tier of storage.
Deduplication, while bringing down the cost per terabyte of flash, brings a set of “taxes” that make it less cost-efficient than customers are led to believe. First, deduplication is far less effective on primary storage than it is on backup storage. Second, there is a performance overhead associated with its use, and all-flash arrays that use deduplication have an inferior cost-per-IOPS rating. Finally, most all-flash vendors don’t pass the full savings of deduplication on to the customer; the customer receives some of the value, but not all of it.
Making Hybrid Instead of All-Flash Work
Making hybrid instead of all-flash work in today’s data center requires rethinking both the hardware configuration and the software design. From a hardware perspective, IT professionals should look for solutions that support QLC flash as the second tier. QLC flash resolves the concern over the performance gap between tier 1 and tier 2. In fact, because the QLC tier will usually have more drives, it likely provides read performance equal to tier 1, if not better.
A QLC second tier, though, requires a more intelligent tiering solution. QLC, while less expensive, has lower write endurance than any other flash type, and that endurance needs careful management. The tiering software must manage QLC differently.
Managing QLC to Make Hybrid Instead of All-Flash Work
The first way the storage software needs to manage QLC differently is to make sure that it only writes large blocks of data to QLC. It needs to establish high and low watermarks on the primary tier so that when tier 1 reaches the upper threshold, the software moves a large batch of data sequentially to QLC until the tier drains back down to the low watermark.
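A minimal sketch of that watermark logic follows. The class, thresholds, and extent model are hypothetical illustrations, not StorONE’s actual implementation:

```python
# Watermark-driven demotion to a QLC tier (illustrative sketch).
HIGH_WATERMARK = 0.85  # assumed thresholds; tunable in practice
LOW_WATERMARK = 0.70

class Tier:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.extents = {}  # extent id -> (size_gb, last_access_time)

    def utilization(self):
        return sum(size for size, _ in self.extents.values()) / self.capacity

def demote_if_needed(tier1, qlc):
    """When tier 1 crosses the high watermark, move its coldest extents
    to QLC in one large batch until it drains to the low watermark."""
    if tier1.utilization() < HIGH_WATERMARK:
        return []
    # Coldest first: oldest last-access times are demoted first.
    cold_order = sorted(tier1.extents, key=lambda e: tier1.extents[e][1])
    batch, freed = [], 0.0
    for ext in cold_order:
        batch.append(ext)
        freed += tier1.extents[ext][0]
        if tier1.utilization() - freed / tier1.capacity <= LOW_WATERMARK:
            break
    # Moving the batch as one large sequential write keeps QLC's
    # write amplification (and endurance wear) low.
    for ext in batch:
        qlc.extents[ext] = tier1.extents.pop(ext)
    return batch
```

The key design point is batching: many small random writes would wear QLC quickly, while one large sequential move per threshold crossing does not.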
The second way the storage software needs to manage QLC differently is to not automatically promote data from QLC to tier 1 on initial access. Remember, the QLC tier likely offers read performance similar to tier 1, so there is no performance advantage to moving it. Also, if data is moved from QLC to tier 1 for reference, but not changed, it will eventually move back down to QLC. The result is an unnecessary write on a tier that is write-sensitive. Instead, intelligent storage software needs to keep data in place on the QLC tier until it detects a write; it then captures that write and places just the changed blocks on tier 1. The storage software should be able to manage this in the same way it manages read/write snapshots.
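The read-in-place, promote-on-write behavior can be sketched as follows. This is a simplified illustration with invented names; block handling in a real array is far more involved:

```python
# Write-triggered promotion from QLC (illustrative sketch).
class HybridVolume:
    def __init__(self):
        self.tier1 = {}  # block address -> data (hot / recently written)
        self.qlc = {}    # block address -> data (cold, served in place)

    def read(self, addr):
        # No promotion on read: QLC read performance is comparable to
        # tier 1, so leaving data in place avoids an unnecessary QLC write.
        if addr in self.tier1:
            return self.tier1[addr]
        return self.qlc.get(addr)

    def write(self, addr, data):
        # Only the changed block moves up; the stale QLC copy is
        # invalidated rather than rewritten, much like a read/write
        # snapshot redirecting modified blocks.
        self.tier1[addr] = data
        self.qlc.pop(addr, None)
```

Reads never touch QLC’s endurance budget; only genuinely changed blocks consume a write, and they land on tier 1.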
Finally, the tiering technology built into the software should support more than two tiers. Ideally, it should support Intel Optane, NVMe/SAS Flash, QLC Flash, Hard Disk Storage, and Cloud for maximum balancing of performance and cost. It should also tier snapshot data so that old snapshots are not consuming expensive media. In his blog, “Reduce Storage Costs and Risks,” Ittai Doron, R&D Team Lead for StorONE, details why a complete tiering solution can dramatically lower storage costs while reducing overall storage risks.
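As a sketch, a multi-tier snapshot-aging policy might look like the following. The tier names follow the article; the age cutoffs are invented for illustration and are not StorONE defaults:

```python
# Illustrative five-tier layout and a snapshot-aging policy.
# The age cutoffs below are assumptions, not product defaults.
TIERS = ["Optane", "NVMe/SAS Flash", "QLC Flash", "Hard Disk", "Cloud"]

def tier_for_snapshot(age_days):
    """Age snapshots onto cheaper media so old copies stop
    consuming expensive tiers."""
    if age_days < 1:
        return "NVMe/SAS Flash"
    if age_days < 30:
        return "QLC Flash"
    if age_days < 365:
        return "Hard Disk"
    return "Cloud"
```

The point of the policy is simply that snapshot data has a predictable cooling curve, so it can ride the tier ladder down automatically.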
Now is the Time for Hybrid Instead of All-Flash
StorONE’s Q2-2020 release introduces the concept of multi-level tiering and the intelligent management of QLC. The combination creates a new opportunity for hybrid arrays and positions them as the better choice for IT professionals designing storage infrastructure for 2020.
To learn more, please join me and StorONE’s R&D Team Lead, Ittai Doron, on a live webinar tomorrow as we discuss and demonstrate all of the new capabilities in S1’s Q2-2020 release.