The Challenges with SSD Caching and Tiering

It is widely accepted that, for most applications, adding flash to the storage infrastructure will greatly improve storage performance and deliver a productivity return on the investment. The controversy in the industry is over where that flash should be used. Legacy vendors seem bent on providing it as an augmentation to the hard disk systems they have sold for so long, while new, upstart storage vendors are providing flash-only systems that relegate hard disk technology to a backup/archive role. Determining which implementation is right for your data center is critical as you consider your next storage purchase.

Flash Implementation Options

There are two basic options when considering flash-based storage. One is to augment hard drives, using as little flash storage as possible to keep costs down while still striking a balance in performance. The other is to replace hard disk storage with a flash-only system. A hard disk system would likely remain in these environments, but its purpose would shift to serving as an archive or backup for the primary storage, which would then be all flash.

Flash Augmentation

The Cost Advantage

While their methods may be different, systems that use flash as either a cache or a tier have the same basic goal: leverage flash to improve the performance of the most active data sets while using hard disk to store less frequently accessed or lower priority data. To keep costs down, these systems are designed to use as little flash as possible, providing a performance boost only to the data that needs it most. An increasing number of systems also leverage this small flash area to improve the performance of inexpensive, but higher capacity, hard drives, further bringing down the cost of the storage system.

However, this sole, but important, advantage of cost reduction brings with it a number of challenges. These systems rely on complex processes to keep the ‘right’ data in cache or on the flash tier, and that complexity can compromise the very performance and cost effectiveness that was promised.

The Analysis Problem

The first challenge is that each of these designs must factor in some sort of performance analysis so that a determination can be made as to which data is “flash worthy”. This creates a “warm up” time concern: the time it takes for the data to be approved for the SSD tier. It typically means that a certain number of accesses must occur within a given time frame before the data is moved from the hard disk tier to the SSD tier.
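
To make the warm-up concern concrete, the sketch below shows the kind of access-counting promotion policy a hybrid system might use. The extent naming, threshold, and window length are illustrative assumptions, not any vendor's actual algorithm.

    import time
    from collections import defaultdict

    PROMOTE_THRESHOLD = 8    # accesses required before an extent is "flash worthy" (assumed)
    WINDOW_SECONDS = 300     # accesses older than this no longer count (assumed)

    class PromotionTracker:
        """Tracks per-extent access counts within a sliding time window."""
        def __init__(self):
            self.accesses = defaultdict(list)   # extent_id -> access timestamps

        def record_access(self, extent_id, now=None):
            """Record an I/O and report whether the extent now qualifies for flash."""
            now = time.time() if now is None else now
            hits = [t for t in self.accesses[extent_id] if now - t <= WINDOW_SECONDS]
            hits.append(now)
            self.accesses[extent_id] = hits
            return len(hits) >= PROMOTE_THRESHOLD

    tracker = PromotionTracker()
    # The extent is read repeatedly, but nothing reaches flash until the
    # threshold is met -- that gap is the "warm up" time.
    for _ in range(PROMOTE_THRESHOLD):
        qualified = tracker.record_access("extent-42")
    print("flash worthy:", qualified)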

This analysis can be a processor-intensive task that needs to happen in real time. Legacy systems, which have had to add this analysis after the fact, may not have enough internal CPU strength to handle the additional load. Either way, it is yet another task that the storage controller has to perform and manage.

Another challenge with these analysis routines is that the great majority only analyze data that is already on the hard drive tier, meaning net new data is not analyzed and does not go to flash first. Most systems send new data to the hard disk tier and then have to wait for it to be promoted to the flash tier. Yet the data most likely to be accessed next is usually the data that was just written. Writes are also the most performance-demanding operation, and sending them to the slowest tier only increases their resource consumption.

The “Miss” Problem

A second challenge is the impact of a cache or tier miss, meaning that the requested data is not in the cache and needs to be accessed from the hard disk tier. Remember that in these environments the cache or the SSD tier is purposely kept small to save costs. A small cache means an even greater chance of a cache or tier miss. Cache-unfriendly environments are not the sole domain of large sequential workloads. Misses can also occur in large, virtualized environments where data access is highly random.

The result of a cache miss is that the data being requested has to come from the hard drive tier, which is much slower. This can lead to user dissatisfaction because of unpredictable performance. Sometimes their accesses are fast (cache hit) and sometimes they are incredibly slow (cache miss). Caching or tiering systems that use low-cost, high-capacity drives to contain costs make this problem worse, because a cache miss then results in the slowest access possible.
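
Some rough arithmetic shows why this feels so inconsistent to users. The latency figures below are assumed, round numbers (roughly 0.2 ms for flash and 10 ms for a high-capacity disk), not measurements of any particular system.

    FLASH_MS = 0.2   # assumed flash read latency, in milliseconds
    HDD_MS = 10.0    # assumed high-capacity hard disk latency

    def average_latency(hit_rate):
        """Expected I/O latency for a given cache hit rate."""
        return hit_rate * FLASH_MS + (1 - hit_rate) * HDD_MS

    for hit_rate in (0.95, 0.90, 0.80):
        print(f"hit rate {hit_rate:.0%}: average {average_latency(hit_rate):.2f} ms, "
              f"worst case {HDD_MS:.1f} ms")

    # Even at a 95% hit rate the average is under 1 ms, but one I/O in twenty
    # still takes the full 10 ms trip to disk -- a 50x swing that users notice.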

This unpredictability leads IT administrators to ask for larger cache systems, increasing costs. It also pushes them, if the ability exists, to lock certain data sets into the SSD tier, which likewise leads to larger SSD tiers and more cost.

For these augmentation types of solutions to work, they need either more exact analysis or a larger cache area. More exact analysis involves tighter integration with both the storage system and the host operating system, and most likely even more processing power. The more common option is to increase the size of the SSD tier. This, of course, adds cost and defeats the whole strategy of a tiering/caching implementation, which was to decrease the amount of flash storage required.

The Data Movement Problem

Once the analysis process is complete, another challenge with a caching or tiering model is the demand of physically moving data back and forth between the SSD tier and the HDD tier. In both models, data being promoted must be written to the SSD tier as it qualifies. For data that needs to leave the SSD tier, caching systems that are “read only” can simply evict the data and do not have to write it back to the HDD tier. Tiering systems, because they move rather than copy data to the SSD tier, must also copy that data back to the HDD tier when it becomes disqualified.

The back-end copy traffic in a busy system can add significant overhead to a storage controller. One solution is to move to a read-only cache design and cut 50% of the back-and-forth traffic. However, this also reduces performance, since writes no longer benefit from flash. Alternatively, storage system vendors would have to design systems with larger back-end bandwidth and more powerful processors so that this data movement does not impact performance. Such a solution would further reduce the cost effectiveness of using SSD to augment the HDD tier.
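
As a rough illustration of that trade-off, the numbers below assume an arbitrary promotion rate and simply compare the back-end traffic of a full tiering design against a read-only cache.

    PROMOTION_MBPS = 200   # data qualifying for flash, MB/s (assumed workload figure)
    DEMOTION_MBPS = 200    # data being displaced at steady state (assumed)

    # A tier must write promoted data to SSD *and* copy demoted data back to HDD.
    tiering_traffic = PROMOTION_MBPS + DEMOTION_MBPS

    # A read-only cache still copies promoted data in, but can simply evict
    # stale data without writing it back to disk.
    read_cache_traffic = PROMOTION_MBPS

    print(f"tiering back-end traffic:   {tiering_traffic} MB/s")
    print(f"read-only cache traffic:    {read_cache_traffic} MB/s")
    # Dropping the write-back leg is the 50% reduction mentioned above --
    # bought at the price of writes that never benefit from flash.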

The Write Problem

A final concern is that this may be the worst way to use flash-based storage. All flash storage has a finite life expectancy, meaning it can only handle so many writes before it wears out. This is made worse in an environment with high data turnover, like a storage system employing a tiering or caching strategy. The continual movement of data generates a significant number of extra write cycles, which shortens the useful life of the flash devices.

Not only do these environments have higher write traffic, they also have a tendency to repeatedly fill the entire capacity of the flash tier. This is problematic because flash cannot overwrite data in place; before new data can be written, the old data in that location must be erased, and erases happen in large blocks while writes happen in much smaller pages. When the device is nearly full, the controller must first copy any still-valid data out of a block before erasing it, so a single host write can trigger several internal writes. This is known as write amplification. Any storage system using flash is susceptible to these write concerns, but the situation is compounded in environments that use flash as a tier or cache, where the number of writes and the likelihood of a full state are both very high.
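
A small, simplified example shows how quickly write amplification grows on a nearly full device; the page and block counts are assumed values, not a real controller's geometry.

    PAGES_PER_BLOCK = 128        # writes happen per page, erases per block (assumed sizes)
    VALID_PAGES_IN_VICTIM = 96   # a nearly full tier leaves few stale pages per block

    # To reclaim one block, the controller must relocate its still-valid pages,
    # then erase the block, before the new host data can be written.
    host_pages_written = PAGES_PER_BLOCK - VALID_PAGES_IN_VICTIM
    internal_pages_written = host_pages_written + VALID_PAGES_IN_VICTIM

    write_amplification = internal_pages_written / host_pages_written
    print(f"write amplification factor: {write_amplification:.1f}x")
    # Here every 32 pages the host writes cost 128 page writes inside the device.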

Most flash devices are rated by the number of years the device is expected to last. In that calculation, the manufacturer assumes a “normal” write pattern, not the increased write activity generated by caching or tiering processes. Since those are not the write conditions used when calculating drive life, the drive can wear out prematurely.
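
Putting the two effects together, a back-of-the-envelope endurance estimate looks like this; the rated endurance, baseline write rate, churn multiplier, and amplification factor are all assumptions chosen only to show the shape of the math.

    RATED_TBW = 1200            # terabytes written the device is rated for (assumed)
    NORMAL_TB_PER_DAY = 0.5     # host writes assumed when the vendor rated drive life
    CHURN_MULTIPLIER = 3.0      # extra writes from constant promotion/demotion (assumed)
    WRITE_AMPLIFICATION = 2.0   # internal writes per host write on a full tier (assumed)

    def life_in_years(tb_per_day):
        """Years until the rated write endurance is consumed."""
        return RATED_TBW / tb_per_day / 365

    baseline = life_in_years(NORMAL_TB_PER_DAY)
    tiered = life_in_years(NORMAL_TB_PER_DAY * CHURN_MULTIPLIER * WRITE_AMPLIFICATION)

    print(f"rated life at 'normal' writes:    {baseline:.1f} years")
    print(f"life under tiering churn and WA:  {tiered:.1f} years")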

The solution for legacy storage vendors that use augmentation is to put a higher grade of NAND flash, such as SLC, in their systems. The problem is that SLC flash stores half the capacity of MLC flash and costs much more. Once again, the complexity of an augmentation strategy designed to cut cost by limiting the amount of flash storage needed can actually increase costs in other areas.

All Flash Storage

The alternative is an all-flash storage system like those offered by Pure Storage. All-flash systems are shareable, solid state storage systems that deliver many of the features one would expect from a legacy storage system. But they eliminate the tiering and caching complexity described above. Everything is on flash; there is no data movement and no data analysis. All storage I/O, both writes and reads, goes to flash. The result is very high and extremely predictable performance.

As Storage Switzerland details in the article “Overcoming the All Flash Cost Challenge”, all-flash systems like those from Pure Storage leverage their simplicity to use lower cost NAND flash and to integrate space efficiency options like deduplication and compression.

Pure Storage is a client of Storage Switzerland 

Twelve years ago George Crump founded Storage Switzerland with one simple goal; to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration and product selection.
