A Better Answer than RAID and Replication for Cloud Storage

Cloud applications and web-based services, such as photo-sharing sites, create some monumental challenges for a storage infrastructure. Not only must they expand almost without limit, they must also protect and secure data sets, typically for thousands or millions of users. And they must accomplish this while aggressively containing costs in order to stay viable in these hyper-competitive environments.

RAID and Replication

The problem is that legacy RAID-based storage systems are inherently inefficient, especially at cloud-scale proportions. In order to maintain data integrity and availability for large user bases, a typical cloud storage infrastructure built on a RAID-based system has to create multiple copies of each data set and distribute those copies across multiple storage systems, even multiple data centers. This process, generally called "replication," can take several forms, from simply copying an entire volume between arrays to more sophisticated object-level replication.

Some of these technologies use incremental change tracking to reduce the amount of duplicate data created. They can also set protection policies at the object or file level (how many copies are made and where they're stored) based on the data's importance.

The point is that while some technologies are more efficient than others, replication in general creates additional copies of a given data set. This may not be an issue when the data sets involved are 'only' in the tens of TBs, but it can be a different story when data gets into the PB range. In these situations the storage space, bandwidth and processing time required to support replication may become unacceptable to the web-based businesses that make up these demanding industry segments.

This article will detail object storage systems with erasure coding and data dispersion, and how this new technology may address these shortcomings. But first, a discussion of traditional storage systems is in order.

How Traditional Storage Works

Traditional storage arrays use a RAID architecture to protect data and maintain availability when there's a problem, typically data corruption or a drive failure. Data is striped across the drives in the RAID set, with parity information added so the data can be recreated when a drive fails. This parity data typically expands the RAID set by 25% to 30%.

When a drive is lost, the remaining data and parity information are used to recreate that drive's contents. This is somewhat of a 'brute force' process, as it must rebuild the entire drive regardless of how much capacity was actually in use; the larger the drive, the longer the process. On high-capacity drives, common in cloud applications, this rebuild can take hours or days, depending on the size and performance of the drives, the processing power of the RAID controllers and the activity level of the system. During the rebuild, the system is vulnerable to another corruption event or drive failure (assuming a single-parity RAID scheme).
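To make the rebuild mechanics concrete, here is a minimal sketch of single-parity protection (a simplified, illustrative model, not any vendor's controller code): the parity block is the XOR of the data blocks, so any one lost block can be recomputed, but only by reading every surviving block in the stripe.

```python
# Minimal single-parity (RAID 5 style) sketch: parity is the XOR of the
# data blocks, so any ONE lost block can be rebuilt from the survivors.
# Illustrative only; real controllers work on whole drives and stripes.

from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_stripe(data_blocks):
    """Return the stripe: the original data blocks plus one parity block."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def rebuild(stripe, lost_index):
    """Recreate the block at lost_index by XOR-ing every surviving block."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

if __name__ == "__main__":
    data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data drives
    stripe = make_stripe(data)                    # + 1 parity drive (25% overhead)
    assert rebuild(stripe, 2) == b"CCCC"          # 'failed' drive 2 is recovered
    # A second failure before the rebuild completes cannot be recovered here.
```

With four data blocks and one parity block, the overhead matches the 25% figure above; losing a second block before the first rebuild completes leaves nothing to reconstruct from, which is the exposure described next.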

Unfortunately, a second failure is also more likely during a rebuild, since every sector and data block of every drive must be read. Under RAID 5, if a second drive fails, all data is lost and recovery from backup must take place. Under RAID 6 it takes a three-drive failure to lose data.

Also, during this time, while users and applications can still access data, the responsiveness of the storage system is often degraded, in some cases to the point that it's unusable. The rebuild can be set to a lower priority, which helps performance but slows the recovery effort, further exposing the environment to a total data loss.

To protect against this, most systems are configured to keep another copy of the data set on another array, in mirrored configurations such as RAID 10 (a stripe of mirrored drives) or a mirrored RAID 5 set (sometimes called RAID 51). This second copy of data, which is typically replicated by the disk array controller, represents a 100% capacity overhead on top of the roughly 25% overhead for the RAID implementation. What's more, as the data set grows, more disk drives and larger drives are typically used in order to minimize capacity costs.

As mentioned, using larger drives extends the time required for RAID rebuilds, but using more disk drives also increases the likelihood of a drive failure, simply as a matter of probability. The response to this increased risk is to make yet another copy, often on tape as well, adding even more cost.

As previously described, this RAID and replication method for producing data resiliency can end up creating 200-300% of capacity overhead. And its effectiveness at providing that protection and availability is suspect, given the inefficiency of copying and replicating data across multiple storage systems. The approach adds significantly to the cost and can be almost prohibitive with very large data sets, as are typical of large public and private cloud infrastructures.

Erasure Coding

Companies like Cleversafe take a different approach to protecting data and maintaining its availability. Their erasure coding and data dispersion process, called "information dispersal," first divides a data object into multiple component parts, or blocks. Then, somewhat like a parity calculation, it expands each block with additional information to create a more resilient superset of data.

Using a mathematical algorithm, this superset can be used to recreate the original data from fewer than the original number of data blocks. Compared with traditional RAID-based disk arrays, which can lose only one or two disk drives, information dispersal can provide many times that level of resiliency within a single disk array. And many systems allow users to configure this level of data resiliency by setting the percentage of data blocks that must be present in order to successfully reproduce the data.
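To illustrate the "any k of n" idea behind erasure coding, the sketch below uses polynomial evaluation and interpolation over a small prime field, a simplified Reed-Solomon-style construction. It is a toy model under those assumptions, not Cleversafe's actual algorithm: each group of k data bytes is expanded into n coded slices, and any k of those slices are sufficient to rebuild the original data.

```python
# Toy k-of-n erasure code using polynomial evaluation over GF(257).
# Each group of k data bytes becomes the coefficients of a polynomial;
# evaluating it at n distinct points yields n "slices". ANY k slices are
# enough to interpolate the polynomial and recover the original bytes.
# Illustrative only; production systems use optimized Reed-Solomon codes.

P = 257  # small prime field, large enough to hold byte values 0..255

def encode(data: bytes, k: int, n: int):
    """Split data into n slices such that any k of them can rebuild it."""
    padded = data + b"\x00" * (-len(data) % k)           # pad to a multiple of k
    slices = [[] for _ in range(n)]
    for g in range(0, len(padded), k):
        coeffs = padded[g:g + k]                          # k bytes = k coefficients
        for x in range(1, n + 1):                         # evaluation points 1..n
            y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            slices[x - 1].append(y)
    return slices                                         # slices[i] belongs to point i+1

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, constant term first) mod P."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
    return res

def decode(available: dict, k: int, length: int) -> bytes:
    """Rebuild the original bytes from any k slices, given as {point: values}."""
    points = list(available)[:k]
    out = []
    for idx in range(len(next(iter(available.values())))):
        ys = [available[x][idx] for x in points]
        coeffs = [0] * k
        for j, xj in enumerate(points):
            # Lagrange basis polynomial L_j(x): 1 at xj, 0 at every other point
            basis, denom = [1], 1
            for m, xm in enumerate(points):
                if m == j:
                    continue
                basis = poly_mul(basis, [(-xm) % P, 1])   # multiply by (x - xm)
                denom = denom * (xj - xm) % P
            scale = ys[j] * pow(denom, P - 2, P) % P      # modular inverse of denom
            for i, b in enumerate(basis):
                coeffs[i] = (coeffs[i] + scale * b) % P
        out.extend(coeffs)
    return bytes(out[:length])

if __name__ == "__main__":
    data = b"cloud object data"
    k, n = 10, 16                                  # any 10 of 16 slices suffice
    slices = encode(data, k, n)
    # Simulate losing 6 of the 16 slices (e.g., failed drives, arrays or sites):
    surviving = {i + 1: slices[i] for i in range(n) if i % 3 != 0}  # 10 remain
    assert decode(surviving, k, len(data)) == data
```

The resiliency level is simply the choice of k relative to n: with 10-of-16 coding, any six slices can be lost and the data can still be reproduced.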

It’s not unusual for these systems to endure the loss of a dozen or more blocks without losing data, a level of protection that traditional RAID arrays certainly can’t match. But when compared to multiple RAID arrays with replication, the contrast gets even more striking.

Using information dispersal, data blocks can then be distributed or “dispersed” across multiple disk arrays or storage systems in a single data center – even across multiple data centers – to give the protection that RAID and replication-based architectures were meant to provide, but without generating two or three (or more) additional copies of data. This also eliminates the processing and bandwidth overhead generated by replication.
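As a small illustration of the dispersal step (the site names and round-robin layout below are hypothetical), this sketch spreads the 16 coded slices from the previous example across four locations and checks how many whole sites can go offline while at least 10 slices remain reachable.

```python
# Hypothetical dispersal layout: spread n coded slices across sites so that
# the loss of entire sites still leaves at least k slices reachable.
from itertools import combinations

def disperse(n, sites):
    """Assign slice indices 0..n-1 to sites round-robin."""
    layout = {site: [] for site in sites}
    for i in range(n):
        layout[sites[i % len(sites)]].append(i)
    return layout

def survives(layout, failed_sites, k):
    """True if at least k slices remain when the given sites are offline."""
    remaining = sum(len(s) for site, s in layout.items() if site not in failed_sites)
    return remaining >= k

if __name__ == "__main__":
    k, n = 10, 16
    sites = ["dc-east", "dc-west", "dc-central", "dc-europe"]  # hypothetical names
    layout = disperse(n, sites)                                # 4 slices per site
    # Any single data center can be lost: 12 slices remain, and 12 >= 10.
    assert all(survives(layout, {s}, k) for s in sites)
    # Losing two data centers leaves only 8 slices, so the data is unavailable
    # until a site returns; choosing k and n is the protection policy decision.
    assert not any(survives(layout, set(pair), k) for pair in combinations(sites, 2))
```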

The net of this process is that object-based storage systems equipped with information dispersal technologies, like Cleversafe's Dispersed Storage Network (dsNet**), can deliver better data resiliency for very large data sets than traditional storage systems built on RAID-based architectures can. And they can do this much more efficiently as well.

While erasure coding's "data expansion" does increase the storage capacity consumed, it's typically in the 30-40% range, compared with the 100-300% of the RAID and replication methods described above. The savings this can generate include power, cooling, storage system floor space and storage infrastructure management, not just raw disk capacity.
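To put those percentages into concrete numbers, the short calculation below compares the raw capacity needed for 1 PB of usable data under the RAID-plus-replication figures described earlier with the same data under a dispersal scheme. The specific ratios are the article's cited ranges applied as assumptions, not measurements.

```python
# Rough capacity arithmetic for 1 PB of usable data, using the overhead
# figures cited in the article (illustrative only, not vendor benchmarks).

def raid_plus_replication(data_pb, parity_overhead=0.25, extra_copies=2):
    """Primary copy with RAID parity, plus full replicated copies (each also RAID protected)."""
    per_copy = data_pb * (1 + parity_overhead)
    return per_copy * (1 + extra_copies)

def erasure_coded(data_pb, k=10, n=13):
    """Erasure coding expands data by n/k; no additional full copies are kept."""
    return data_pb * n / k

if __name__ == "__main__":
    usable = 1.0  # PB of application data
    print(f"RAID + 2 replicas : {raid_plus_replication(usable):.2f} PB raw")  # 3.75 PB (275% overhead)
    print(f"10-of-13 dispersal: {erasure_coded(usable):.2f} PB raw")          # 1.30 PB (30% overhead)
    # The 10-of-16 layout used in the earlier sketches would need 1.60 PB raw.
```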

Geographic Dispersion

Object storage with erasure coding provides another benefit as well, one that's especially valuable in cloud and web-based applications that handle very large amounts of data stored in multiple physical locations. When replication between data centers is required, such as for disaster recovery or content distribution, geographic dispersion can spread this same superset of data blocks across physical storage systems around the country, or around the world. Compared with RAID and replication schemes, this process significantly reduces the amount of data that must be copied and transported, producing dramatic cost savings in storage capacity, processing power and bandwidth consumption.

Summary

The scale of storage expansion required by the cloud may be rendering traditional RAID and replication storage infrastructures obsolete. The latency introduced by large RAID rebuilds is unacceptable, and the capacity, processing and bandwidth overhead these legacy systems need is simply too costly. For use cases such as Software as a Service (SaaS), social media and other large, web-based applications, object storage with erasure coding and data dispersion can be the enabling technologies that eliminate RAID's inefficiency and provide the economics required to make these business models feasible.

** dsNet is a registered trademark of Cleversafe, Inc.

Cleversafe is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years' experience in high-technology industries. He's held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.
