Can Object Storage Solve the RAID TCO Challenge?

The cost of storage matters to every data-dependent company, but in hyper-scale environments such as web-based enterprises it can consume the business. With simultaneous requirements for scalable capacity, reliability and availability, these organizations face serious challenges in storage economics. RAID and replication are the architectures that have historically been used in large storage systems, but object storage is being tapped as an alternative that can offer a compelling TCO advantage.

In environments of this scale, disk failure is a given, based simply on mathematical probability and the sheer number of drives. The system must therefore be resilient enough to sustain those failures without a reduction in performance or data availability. RAID has been the standard technology answer to the inevitability of disk drive failure.

RAID-based Storage

Traditional storage architectures use RAID to ensure data integrity within a single storage system and then create multiple complete copies of data to protect against a system-level failure. These extra copies can also keep data available during a RAID rebuild, which, for systems built on large-capacity disk drives, can take days to complete.

These copies may also be replicated to another disk array in a remote location to provide protection against a site-level outage. Many also include a backup copy on tape. This means buying disk capacity for each copy, plus RAID overhead, and the tape infrastructure required. Object storage systems use a different technology to maintain data availability in large storage infrastructures, without having to create numerous copies.

How Object-based Storage Works

Many object storage systems use a technology called “erasure coding” with data dispersion to create an efficient and reliable storage system that can scale to the proportions these industries require. Erasure coding first parses a data object into multiple component blocks; then, somewhat like a parity calculation, it expands each block with some additional information (though less than another full copy) to create a more resilient superset of data.

Data dispersion refers to the spreading of these components of a data set across multiple storage systems. Most object storage systems use a node-based, scale-out architecture that allows these data components to be physically dispersed as well.

Because of this expansion, dispersed object storage systems that use erasure coding can reassemble each data object or file without having all of its constituent segments available. The system can lose a certain number of segments and still maintain data integrity: a disk drive can fail, a storage node can go down, even a temporary network outage can be sustained without putting data at risk or making it unavailable.

Information dispersal produces a system that’s resilient enough to sustain many types of subsystem failures and maintain data integrity and accessibility. But there’s still only one instance of the data set stored.
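To make the mechanics concrete, the sketch below implements a deliberately simplified erasure code in Python: the object is split into k data segments plus a single XOR parity segment, so any one lost segment can be rebuilt from the others. Production dispersed object stores use stronger “k of n” codes (Reed-Solomon and similar) that survive multiple simultaneous losses; the function names, segment count and sample data here are illustrative assumptions, not any vendor’s implementation.

def xor_bytes(a, b):
    # XOR two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k):
    # Split an object into k data segments and append one XOR parity segment
    seg_len = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(seg_len * k, b"\x00")          # pad so segments align
    segments = [padded[i * seg_len:(i + 1) * seg_len] for i in range(k)]
    parity = segments[0]
    for seg in segments[1:]:
        parity = xor_bytes(parity, seg)
    return segments + [parity]                         # n = k + 1 segments in total

def decode(segments, k, original_len):
    # Rebuild the object even if any one segment (data or parity) is missing (None)
    missing = [i for i, s in enumerate(segments) if s is None]
    assert len(missing) <= 1, "this toy single-parity code tolerates only one lost segment"
    if missing:
        seg_len = len(next(s for s in segments if s is not None))
        rebuilt = bytes(seg_len)
        for s in segments:
            if s is not None:
                rebuilt = xor_bytes(rebuilt, s)        # XOR of survivors recreates the lost segment
        segments[missing[0]] = rebuilt
    return b"".join(segments[:k])[:original_len]

if __name__ == "__main__":
    obj = b"customer photo bytes ..."
    segs = encode(obj, k=4)      # 5 segments total: 25% overhead instead of a second full copy
    segs[2] = None               # simulate losing the disk or node holding segment 2
    assert decode(segs, k=4, original_len=len(obj)) == obj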

The Cost of Storage

At the foundation of a storage cost calculation is the simple fact that a larger capacity system will usually cost more than a smaller one. As described above, a RAID-based system stores the primary copy of data on a disk array and then creates additional copies on different arrays. The net of this process is that enough disk capacity for the original, plus additional systems and capacity for two or three extra copies of the data, must be acquired, implemented and operated.

In many of the environments that are using object storage, total data stored is in the petabyte range or higher. These are often online enterprises that are storing customers’ data, so they have to make sure it’s protected, but the infrastructure must also maintain reasonable access to that data. One example is the photo sharing site Shutterfly, whose object storage system contains over 80PB of data.

A 1PB Example

Web-based services that keep customers waiting to access their data won’t keep those customers for long, so multiple disk-based copies of data are usually the norm. Scaling the numbers down to a 1 petabyte data set, a typical RAID and replication storage infrastructure large enough to hold three copies of that data set plus a tape backup could be calculated as follows (a short sketch of the same arithmetic follows the tally):

1.3PB of disk storage for the 1PB primary copy (with 30% RAID overhead)

1.3PB of disk storage for secondary on-site replicated copy, with RAID overhead

1.3PB of disk storage for the third off-site replicated copy, with RAID overhead

1.0PB of tape storage for the off-site backup copy

————–

4.9PB of storage, essentially 5x the original data set size
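For readers who want to adjust the assumptions, here is the same tally as a quick Python sketch; the 1PB data set, 30% RAID overhead, copy counts and tape copy are the figures assumed above, not fixed constants.

# Back-of-the-envelope tally of the RAID-plus-replication example above
primary_pb = 1.0          # size of the original data set, in PB
raid_overhead = 0.30      # assumed RAID capacity overhead
disk_copies = 3           # primary + on-site replica + off-site replica
tape_pb = 1.0             # off-site tape backup copy

disk_pb = disk_copies * primary_pb * (1 + raid_overhead)   # 3 x 1.3PB
total_pb = disk_pb + tape_pb
print(f"Total: {total_pb:.1f}PB, about {total_pb / primary_pb:.1f}x the original data set")
# Prints: Total: 4.9PB, about 4.9x the original data set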

Remote replication could be used instead of tape for the off-site copy, but this adds bandwidth and replication software costs, plus the cost of the 1.3PB of disk storage that’s still needed off-site. Two on-site copies could be kept instead of three, but availability would be in jeopardy if one system went down, to say nothing of the risk of a second system failure.

There are certainly other ways to architect a storage infrastructure with the data protection and data availability needed for these kinds of enterprises. But the point of this calculation is to show how much redundant capacity RAID and replication methods can generate for storage infrastructures supporting hyper-scale data centers. Object storage systems using erasure coding produce a much different result.

Erasure Coding

Information dispersal and erasure coding expand the primary data set by adding redundant segments to improve resiliency. The level of that redundancy is configurable, typically expressed as a “k of n” formula, where n is the total number of data segments created and k is the minimum number of segments required to recreate the original data set bit-perfectly.

For a 10 of 16 configuration, typically used by companies like Cleversafe, the erasure coding algorithm would expand the original data set from 10 data segments to 16, a 60% ‘overhead’. While this is indeed more than the 30% estimate used in the RAID example, an object storage system requires only one instance of the primary data set to deliver the same or higher levels of reliability and availability as a 3-4x copy-based system using RAID.
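Expressed as arithmetic, and assuming the 10 of 16 example above (other k and n values simply change the ratio):

# Raw-capacity expansion for a "k of n" dispersed configuration
k, n = 10, 16                 # any 10 of the 16 segments can rebuild the object
expansion = n / k             # 1.6x the original capacity
overhead = expansion - 1      # 0.6, i.e. 60% overhead on a single instance
print(f"Expansion: {expansion:.1f}x, overhead: {overhead:.0%}")
# Prints: Expansion: 1.6x, overhead: 60%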

The TCO of Object Storage

Using the calculations from above, instead of creating two or three additional copies of data and consuming roughly 5PB of capacity to store a 1PB data set, information dispersal needs only 1.6PB of storage. From a capacity perspective, it’s almost 3x more efficient. But a lower TCO involves more than just using less storage space; it also includes other types of overhead.
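Putting the two capacity figures side by side, again using the assumed 1PB data set and the tallies above:

# Capacity comparison between the two approaches for the same 1PB data set
raid_replication_pb = 4.9       # from the RAID-plus-replication tally above
dispersed_pb = 1.0 * 16 / 10    # one instance of the data set at 1.6x expansion
ratio = raid_replication_pb / dispersed_pb
print(f"Dispersed: {dispersed_pb:.1f}PB vs. RAID + replication: {raid_replication_pb:.1f}PB "
      f"({ratio:.1f}x difference in raw capacity)")
# Prints: Dispersed: 1.6PB vs. RAID + replication: 4.9PB (3.1x difference in raw capacity)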

When the costs of power, cooling, data center floorspace and personnel are factored in, the object storage system using erasure coding can produce even more dramatic savings. For example, the IT personnel required to run storage systems housing 5PB of data in multiple locations, including a tape backup infrastructure, would be several times that required to support an information dispersal architecture with about 1/3 as much storage capacity. For companies heavily dependent on the cost of storage, this TCO advantage can make the difference between profit and loss.

Summary

For a storage system, data protection usually means a replicated or backup copy, sometimes on tape, and data availability means another copy, this time online and accessible. The traditional RAID and replication approach to creating and handling all these copies has worked for years. But now, with hyper-scale environments generating hundreds of terabytes of data, the cost of all these extra copies can price a company out of business. The economics driving these web and cloud-based businesses can’t support it.

Object storage systems, like those from Cleversafe, are helping these companies address the capacity challenges of their storage-intensive infrastructures without generating multiple copies of data. The result is scalable, available and affordable storage with some very compelling TCO savings.

Cleversafe is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States.  Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt.  He and his wife live in Colorado and have twins in college.
