How to Design Storage for the Next Generation Data Center

Being a cloud provider is a challenging proposition. Like a utility, a cloud provider (public or private) has to support almost any level of user consumption and maintain that support as demand changes, with little or no advance notice. These requirements are tough to meet, especially with legacy disk-based storage systems. In a similar way, the internal demands of IT-as-a-service have changed the way enterprises must think about their own storage infrastructures. Moving forward, these organizations will need a new storage technology, and a ‘next generation’ data center, to stay abreast of demand.

Watch our On Demand Webinar "Storage Requirements for the Next Generation Data Center" and get an exclusive white paper.

More specifically, the storage infrastructure needs to scale almost without limit while remaining extremely flexible, efficient and economical. But first, it must be consistent. Combining the workloads generated by different users or applications from different departments (or companies in different industries) on the same infrastructure is particularly demanding of any shared storage environment.

The storage supporting this kind of data center needs the ability to expand on the fly, be configured for each user and then reconfigured as often as necessary. It needs to be efficiently managed so that overhead doesn’t break the cloud provider business model or the IT budget. And it should provide consistent performance, day in and day out: a guaranteed quality of service (QoS) for users. Unfortunately, that’s not something current generation storage systems can do, and the problem starts with the storage media.

Disk Storage is the problem

The mechanics of hard disk drives (HDDs) – spinning platters and floating disk heads – have made traditional storage systems slow, inefficient and unpredictable. The time it takes to rotate a disk platter and move a disk head to the correct position creates latency that severely limits disk performance, particularly IOPS (I/O Operations Per Second). This latency is also highly variable, based on where the desired data is located on the disk platters. And that variability creates performance bottlenecks as data requests stack up, since each read/write head can only service one I/O request at a time.
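
To put rough numbers on that ceiling, the sketch below estimates the random-I/O limit of a single spindle from its seek time and rotational latency. The drive figures are assumed, typical values used only for illustration, not the specifications of any particular product.

```python
# Back-of-the-envelope IOPS ceiling for a single HDD, using assumed
# (typical, not vendor-specific) mechanical characteristics.

def hdd_max_iops(rpm: float, avg_seek_ms: float) -> float:
    """Estimate the random-I/O ceiling of one spindle.

    Average rotational latency is half a revolution; each random I/O
    must wait for a seek plus that rotation before any data moves.
    """
    avg_rotational_ms = (60_000.0 / rpm) / 2   # half a revolution, in ms
    service_time_ms = avg_seek_ms + avg_rotational_ms
    return 1000.0 / service_time_ms            # requests serviced per second

# Assumed figures for illustration only:
for rpm, seek_ms in [(7_200, 8.5), (15_000, 3.5)]:
    print(f"{rpm} RPM, {seek_ms} ms seek -> ~{hdd_max_iops(rpm, seek_ms):.0f} IOPS")
```

Even the fastest spindle in that example tops out at a few hundred random IOPS, which is why disk arrays have traditionally needed large spindle counts just to deliver modest performance.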

Flash to the rescue

In order to boost IOPS performance, the storage industry has adopted NAND flash and combined it with spinning disk. Flash has no moving parts and therefore minimizes latency in the I/O process; each read or write operation takes essentially the same amount of time. Also, flash chips can support multiple I/O channels allowing flash-based storage to service multiple data requests simultaneously.
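
Extending that back-of-the-envelope arithmetic, the sketch below contrasts a single disk head that services one request at a time with a flash device that keeps several requests in flight across parallel channels. The per-request latencies and channel count are assumptions chosen for illustration only.

```python
# Illustrative (assumed, not measured) comparison of serial vs parallel
# request servicing: one disk head handles a single request at a time,
# while a flash device can service requests over several channels at once.

def max_iops(service_time_ms: float, parallel_channels: int = 1) -> float:
    """Requests per second with 'parallel_channels' requests in flight."""
    return parallel_channels * (1000.0 / service_time_ms)

hdd_iops   = max_iops(service_time_ms=12.7, parallel_channels=1)  # one head, ~12.7 ms per request
flash_iops = max_iops(service_time_ms=0.1,  parallel_channels=8)  # ~0.1 ms per request, 8 channels

print(f"HDD  : ~{hdd_iops:,.0f} IOPS")
print(f"Flash: ~{flash_iops:,.0f} IOPS")
```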

But adding flash’s performance to a disk array doesn’t address the fundamental disk latency problem; it’s just a Band-Aid. Augmenting disk performance with flash can help, but it also creates more complexity as flash-enabled disk systems with caching and tiering are added to the existing infrastructure. This, in turn, introduces another problem: the cache or tier ‘miss’ that occurs when the required data isn’t on flash when it’s needed.
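
A quick illustration of why those misses matter: even with a high flash hit rate, average latency is dragged toward the slow disk path behind the cache, and any request that misses still pays the full disk penalty. The latency figures below are assumed round numbers, not measurements of any system.

```python
# Hedged sketch of cache/tier 'miss' economics with assumed latencies.

FLASH_LATENCY_MS = 0.2    # assumed flash read latency
DISK_LATENCY_MS = 10.0    # assumed HDD read latency (seek + rotation)

def effective_latency_ms(hit_rate: float) -> float:
    """Average read latency for a given flash cache/tier hit rate."""
    return hit_rate * FLASH_LATENCY_MS + (1.0 - hit_rate) * DISK_LATENCY_MS

for hit_rate in (0.90, 0.95, 0.99):
    print(f"{hit_rate:.0%} hit rate -> {effective_latency_ms(hit_rate):.2f} ms average read")
```

And the averages hide the worst of it: the requests that miss are still 10 ms requests, so response times remain unpredictable no matter how good the average looks.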

Incrementally expanding a collection of storage systems in this fashion creates an infrastructure that’s more difficult to manage and one that can’t provide the predictable performance that’s imperative for the next generation data center. So if flash is the medium that can provide the performance that HDDs lack, why not simply build the entire array out of flash?

The Benefits of all Flash

Flash has orders of magnitude better performance, and more consistent performance, than HDDs. Flash devices have no moving parts, no mechanical limitations that can affect performance, and can also support multiple, simultaneous I/O requests. In other words, flash has a significant latency advantage. An all-flash array can fully leverage this intrinsic advantage and maintain it consistently, making it an appealing solution for the next generation data center. Flash is also more efficient than mechanical disk storage, consuming ¼ of the power and cooling and 1/10 of the data center rack space, month after month.

An array designed for the next generation data center

Cloud providers (public and private) and enterprises don’t run lab environments; they are real businesses that must stay on top of demand and maintain efficiency or they’re out of business. This means their storage infrastructures have to grow with demand, but only at the pace of that growth. They can’t take a ‘build it and they will come’ attitude; the upfront cost of infrastructure that sits idle is just too high. Also, the longer they wait to make those capital purchases, the less expensive that infrastructure will be.

Increasing capacity is a foundational requirement for a shared storage infrastructure, but the next generation data center has to do it better. This often means a scale-out, clustered topology where performance can expand as capacity increases. Unlike traditional ‘scale-up’ storage systems where an essentially fixed level of performance is spread over a growing spindle count, each node in a scale-out cluster contains processing power and connectivity, in addition to more capacity.
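
The difference shows up clearly in a simple model. In the sketch below (all of the performance and capacity figures are assumptions chosen for illustration), the scale-up system spreads a fixed controller budget over a growing pool of capacity, while the scale-out cluster holds its IOPS-per-terabyte ratio steady as nodes are added.

```python
# Illustrative (assumed numbers) model of performance density as capacity grows.

def scale_up(shelves: int, controller_iops: int = 100_000, tb_per_shelf: int = 50):
    """Fixed controller performance spread over a growing number of disk shelves."""
    capacity_tb = shelves * tb_per_shelf
    return capacity_tb, controller_iops / capacity_tb

def scale_out(nodes: int, iops_per_node: int = 50_000, tb_per_node: int = 25):
    """Each node adds performance and connectivity along with its capacity."""
    capacity_tb = nodes * tb_per_node
    return capacity_tb, (nodes * iops_per_node) / capacity_tb

for n in (2, 4, 8):
    up_tb, up_density = scale_up(n)
    out_tb, out_density = scale_out(n)
    print(f"{n} units: scale-up {up_density:,.0f} IOPS/TB at {up_tb} TB | "
          f"scale-out {out_density:,.0f} IOPS/TB at {out_tb} TB")
```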

The single pool of high performance storage created by a scale-out, all-flash architecture can grow to the desired capacity in small increments, so storage can be expanded only as needed. And since each node runs the same operating system with the same all-flash storage, the pool will remain essentially ‘homogeneous’ regardless of how large it gets. This means performance will be consistent as the infrastructure grows and management costs won’t balloon as IT struggles with ‘storage system sprawl’.

To ensure the pool stays adequately sized, these storage systems need intelligent operational features like simple installation, zero-downtime expansion and non-disruptive upgrades. But next generation data centers can’t just rely on storage administrators to work behind the scenes keeping the system up and the tenants happy. They also need automation, such as automatic load balancing, self-healing flash drives and API-driven management routines that can be configured for each user. But in the end, flash-based performance and scale-out capacity aren’t enough. The next generation data center also needs a quality of service capability.
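
As a rough idea of what that API-driven, per-user configuration might look like, the sketch below provisions a volume with its own QoS settings in a single scripted call. The endpoint, field names and values are hypothetical placeholders, not any vendor’s actual API; the point is simply that provisioning and QoS can be automated per tenant rather than configured by hand.

```python
# Minimal sketch of API-driven, per-tenant provisioning.
# The endpoint and payload fields are hypothetical, for illustration only.
import json
import urllib.request

STORAGE_API = "https://storage.example.com/api/v1"   # hypothetical management endpoint

def create_tenant_volume(tenant: str, size_gb: int,
                         min_iops: int, max_iops: int, burst_iops: int) -> dict:
    """Provision a volume with per-tenant QoS settings in one API call."""
    payload = {
        "name": f"{tenant}-vol",
        "sizeGB": size_gb,
        "qos": {"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
    }
    req = urllib.request.Request(
        f"{STORAGE_API}/volumes",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: each tenant gets a guaranteed floor, a cap and burst headroom.
# create_tenant_volume("tenant-a", size_gb=500, min_iops=1_000, max_iops=5_000, burst_iops=10_000)
```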

Real Quality of Service

Quality of Service (QoS) requires a sound methodology for controlling the available resources so that individual hosts get the performance they have paid for. Instead of simply capping that performance (called “rate limiting”), a method that allows temporary bursts of demand is much more effective at managing the available performance while still satisfying the users on that shared storage.
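
One common way to allow that bursting is a credit scheme: a volume that stays under its cap banks the unused headroom and can spend it later to run above the cap for short periods. The sketch below is a minimal illustration of the idea; the class name, limits and credit policy are assumptions, not a description of any specific product’s implementation.

```python
# Hedged sketch of burst-capable QoS vs a hard rate limit, with assumed numbers.

class BurstQoS:
    """Cap sustained IOPS at 'max_iops', but let a volume spend banked
    credits to burst up to 'burst_iops' for short periods."""

    def __init__(self, max_iops: int, burst_iops: int, credit_cap: int):
        self.max_iops = max_iops
        self.burst_iops = burst_iops
        self.credit_cap = credit_cap
        self.credits = 0            # unused IOPS accumulate as burst credits

    def allow(self, requested_iops: int) -> int:
        """Return the IOPS granted for this one-second interval."""
        if requested_iops <= self.max_iops:
            # Under the cap: grant everything and bank the unused headroom.
            self.credits = min(self.credit_cap,
                               self.credits + self.max_iops - requested_iops)
            return requested_iops
        # Over the cap: spend credits to burst, never exceeding burst_iops.
        granted = min(requested_iops, self.burst_iops, self.max_iops + self.credits)
        self.credits -= granted - self.max_iops
        return granted

qos = BurstQoS(max_iops=5_000, burst_iops=10_000, credit_cap=30_000)
for second, demand in enumerate([2_000, 2_000, 9_000, 9_000, 9_000], start=1):
    granted = qos.allow(demand)
    print(f"t={second}s demand={demand} granted={granted} credits={qos.credits}")
```

In this model a volume that has been quiet can absorb a short spike well above its cap, then settles back to its sustained rate once the credits are spent, which keeps users happier than a hard limit while still protecting their neighbors on the shared system.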

But in order to provide true QoS, the storage infrastructure needs to be designed to maintain that ‘supply’ of performance regardless of what’s happening in the environment. For example, performance must remain consistent while the infrastructure expands, as described above, with a scale-out architecture and functions like automatic load balancing. Supply must also be unaffected by events like drive failures, where RAID rebuilds can severely degrade overall performance. For a system to provide true quality of service, these characteristics must be designed in.

Summary

Providing storage for a modern, next generation data center is tough duty. These infrastructures must provide high levels of performance to mixed workloads in a diverse, multi-tenant environment, and do so consistently as they scale – almost without limit. It’s no wonder that legacy, disk-based storage systems aren’t up to the task and flash-based storage arrays are being added to meet the demand.

Companies like SolidFire have developed all-flash arrays to support the needs of the enterprise and cloud storage providers. Leveraging a scale-out, clustered architecture, RAID-less data protection and performance virtualization, these systems have designed in a true quality of service capability that’s essential for the next generation data center.

SolidFire is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years of experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.
