How to Make Reliable SSDs – Reliable NAND Flash

“Reliability” in a storage context means that the storage infrastructure can be counted on to keep data safe and to produce that data in a reasonable timeframe when called upon. While it’s true that storage systems have redundancies built in and mechanisms to maintain operations when a subsystem fails, reliable systems don’t depend on these mechanisms alone. Instead, they focus on making the components themselves more reliable, which in the case of flash storage means the NAND flash itself.

Unlike magnetic media (hard disk drives), flash media wears with prolonged use, a situation that manufacturers ameliorate in a number of ways. With the prevalence of lower cost flash technologies such as MLC (multi-level cell) in more demanding enterprise applications, understanding flash media reliability is essential for solid state drive (SSD) vendors. But it’s also important for end users who are deciding which SSD to implement.

The solid state drive needs to support efficient read and write processes in addition to safely storing data ‘at rest’. To do this the SSD must protect data the entire time that it’s committed to the SSD, including while it’s actively being handled by the storage controller’s internal architecture.

Data path protection

To produce a reliable write, the data must be protected throughout the data path: the logical course taken by data as it moves between the host-to-SSD interface and the NAND media during a read or write operation. This applies not just to host data but also to metadata such as the Logical Block Address (LBA), the index entry that provides the information needed to access a block of data.

In most current SSD designs, buffers sit in the data path between the different functional structures of the SSD. These first-in-first-out (FIFO) buffers improve performance by smoothing the throughput differences between those individual subsystems. For example, an input buffer holds data as it arrives from the host-to-SSD interface transport (such as SATA). Another buffer interfaces with the DRAM, which typically holds the addressing information, the LBA table, for each data block recorded on the SSD.

Some SSD manufacturers use parity and ECC (error correction code) methods to ensure that no data corruption has occurred as it traverses the various FIFO buffers throughout the data path. Micron, for example, has a technology called “DataSAFE” that generates additional memory protection ECC (MPECC) as soon as the host data comes through the host-to-drive interface and follows that data through the SSD, including the DRAM buffers.

Unlike an additional copy of the data itself, which won’t reveal whether the original data is corrupted, or whether the second copy itself is corrupted, an additional error correction check does increase reliability while using very little storage space. This MPECC data provides important assurance about the integrity of the data, and the metadata, throughout the DRAM on the SSD until both are safely written to flash.
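The principle is easier to see in code. The sketch below is illustrative only: MPECC is Micron’s proprietary scheme, so a simple CRC-32 stands in for it here, and the hop names and function names are hypothetical. The idea it shows is the one described above: generate a protection code the moment data arrives from the host interface, then re-verify that code after every buffer in the data path.

```python
# Illustrative sketch only: a CRC-32 stands in for Micron's proprietary
# MPECC, to show the principle of generating a protection code at ingress
# and re-verifying it at each hop of the data path.
import zlib

def ingress(data: bytes):
    """Generate the protection code as data enters from the host interface."""
    return data, zlib.crc32(data)

def verify_hop(data: bytes, code: int, hop: str) -> None:
    """Re-check the code after each buffer (input FIFO, DRAM, NAND channel)."""
    if zlib.crc32(data) != code:
        raise IOError(f"data corrupted in transit at {hop}")

data, code = ingress(b"host sector payload")
for hop in ("input FIFO", "DRAM buffer", "NAND channel FIFO"):
    verify_hop(data, code, hop)   # silent if the payload is intact
```

Because the check travels with the data rather than being recomputed from scratch at the destination, corruption introduced in any intermediate buffer is caught at the next hop instead of being silently written to flash.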

Another important data path protection mechanism is to store the host LBA that the data came from along with the data itself, in addition to recording it in the LBA table in DRAM. The DataSAFE technology actually embeds this critical piece of metadata within the data block it represents so the two can never be separated. This provides another level of data accuracy and protection, ensuring that the SSD will return the exact data that’s requested.
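A minimal sketch of that idea follows, with hypothetical names and a Python dict standing in for the NAND and its mapping table (this is not Micron’s actual on-media format). The point is the read-side check: because the LBA is embedded in the block, the drive can confirm that the page it fetched really belongs to the LBA the host asked for, catching a misdirected write or a stale mapping entry.

```python
# Hypothetical sketch: the host LBA is embedded inside the block it
# addresses, so a read can verify the page belongs to the requested LBA.
from dataclasses import dataclass

@dataclass
class FlashBlock:
    lba: int          # host LBA embedded alongside the payload
    payload: bytes

def write_block(nand: dict, lba: int, payload: bytes) -> None:
    nand[lba] = FlashBlock(lba, payload)   # LBA travels with the data

def read_block(nand: dict, lba: int) -> bytes:
    block = nand[lba]
    if block.lba != lba:                   # misdirected-write / stale-map check
        raise IOError(f"LBA mismatch: asked for {lba}, block claims {block.lba}")
    return block.payload

nand = {}
write_block(nand, 42, b"user data")
assert read_block(nand, 42) == b"user data"
```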

Dynamically tuning the NAND

As mentioned earlier, NAND flash cells do change over time and this creates a need to adjust some of the processes that are involved in reading and writing data to flash storage to improve performance and increase reliability. The read process, as an example, should be tuned periodically throughout the lifecycle of the flash device. This means changing the settings on the NAND itself, which are set initially by default, based on the errors the SSD may detect during read operations.

To minimize the impact on host I/O, this tuning process can occur in the background by sampling NAND pages, running the ECC and saving the settings that produce the best results. Alternatively, tuning can run in the foreground, as the host requests pages to be read. In the foreground scenario, the controller dynamically tunes the die during a read operation if the optimal settings have changed since the last time the page was read.
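The core of either approach can be sketched as a simple search. In this hedged example, `errors_at()` is a stand-in for sampling a page at a candidate read-voltage offset and counting the bit errors the ECC reports; real controllers issue NAND-specific set-feature commands to change read levels, which is beyond a sketch like this.

```python
# Hedged sketch of read-level tuning: sample a page at several candidate
# read-voltage offsets, count the ECC-reported bit errors at each, and
# keep the offset that produces the fewest errors. errors_at() is a
# stand-in for the actual NAND sampling step.
def tune_read_offset(errors_at, candidate_offsets):
    """Return the read-voltage offset that minimizes observed bit errors."""
    return min(candidate_offsets, key=errors_at)

# Toy error model: pretend the cell voltage distribution has drifted by
# +2 steps, so errors grow with distance from offset 2.
errors = lambda offset: abs(offset - 2) * 7
assert tune_read_offset(errors, range(-3, 4)) == 2
```

Run in the background, this search happens on sampled pages between host requests; run in the foreground, it happens on the page the host just asked for, after the ECC reports that the current settings are producing errors.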

Whether conducted in the foreground, in the background or both (some manufacturers tune the NAND as a background process and during read operations), the net impact of dynamic NAND tuning is to further enhance the readability of the flash at the die level. This reduces the number of errors encountered and improves the long-term reliability of the SSD.


RAIN – Redundant Array of Independent NAND

To provide an additional layer of longer-term protection for data at rest, companies may also use a parity scheme called RAIN (Redundant Array of Independent NAND). In principle, the operation of RAIN inside the SSD closely resembles conventional RAID protection in rotating disk drive arrays. A finite number of user data elements are used to calculate parity, then the combination of user data plus parity is stored as a single logical construct.

The specific RAIN implementation is a key element of SSD design and is optimized for multiple factors such as flash characteristics, intended usage model, endurance requirements and controller design. As an example, in a 15+1 RAIN scheme, 15 pages of user data plus one page of non-rotating parity comprise a RAIN stripe. Each element of the stripe may be written to a different plane, die or package, both to increase drive-level performance and to protect user data from anything from a single page failure all the way up to a catastrophic media failure.

To illustrate, each storage element could be thought of as a data block in a traditional disk array using RAID, the dies could be disk drives and the package could be the drive shelf. Like a disk array that’s designed to lose an entire drive shelf without data loss, this kind of RAIN scheme allows an entire NAND flash package to fail, losing multiple silicon dies, without causing data loss.
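The arithmetic behind that recovery is the same XOR parity used in RAID. The sketch below is illustrative rather than Micron’s exact RAIN implementation (page size, layout and function names are invented): parity is the XOR of the 15 data pages, and any one lost stripe element can be rebuilt by XOR-ing the surviving 15.

```python
# Illustrative 15+1 XOR parity stripe (not Micron's exact RAIN design):
# parity = XOR of 15 data pages; any one lost element is rebuilt from
# the other 15, just as in single-parity RAID.
PAGE = 16  # toy page size in bytes

def xor_pages(pages):
    """XOR a list of equal-sized pages together."""
    out = bytearray(PAGE)
    for page in pages:
        for i, b in enumerate(page):
            out[i] ^= b
    return bytes(out)

def build_stripe(data_pages):
    """15 data pages -> 16-element stripe with parity appended."""
    assert len(data_pages) == 15
    return data_pages + [xor_pages(data_pages)]

def rebuild(stripe, lost_index):
    """Recover the failed element (a page on a failed die/package)."""
    survivors = [p for i, p in enumerate(stripe) if i != lost_index]
    return xor_pages(survivors)

pages = [bytes([i]) * PAGE for i in range(15)]
stripe = build_stripe(pages)
assert rebuild(stripe, 3) == pages[3]   # a failed element is recovered
```

Because XOR of all 16 stripe elements is zero, XOR-ing any 15 of them yields the missing one, which is why spreading the 16 elements across separate dies and packages lets the drive survive the loss of any one of them.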


Reliability is the ability of a storage device to keep data safe and to reproduce that data on demand within an acceptable timeframe. Producing reliable devices is the cumulative result of many individual design and manufacturing elements. In a storage system this includes the storage media, which for SSDs is the NAND flash itself, as well as an in-depth understanding of how to get optimum reliability and performance from that media. Vendors like Micron are leveraging their vertical integration of the entire flash process to improve reliability and performance through the use of technologies like data path protection, dynamic NAND tuning and RAIN.

Micron is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States.  Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt.  He and his wife live in Colorado and have twins in college.
