The HPC Storage Challenge

High Performance Computing (HPC) creates unique challenges for storage infrastructure. First, storage systems supporting HPC must meet increasingly demanding performance requirements. High performance HPC storage enables more simulations and deeper analytics across a wider set of data. Leveraging a scale-out all-flash array can meet many of these performance requirements. The greater obstacle, however, is cost-effectively dealing with the capacity challenges HPC places on the storage infrastructure.

Can Flash Do it All?

Some flash vendors claim a single flash system can meet both HPC performance AND capacity demands. Most scale-out flash systems should be able to scale to meet the performance demands of the typical HPC environment and, at least initially, should be able to scale to meet the capacity demands. Remember that scale-out storage systems count on parallel workloads to deliver on their performance claims, something the HPC environment tends to provide. The result is that, technically, an all-flash, scale-out storage system could deliver both the capacity and performance HPC needs. The bigger question is whether the organization can afford to invest in all-flash for all of its HPC data, especially long term.

Justifying Hard Disks for HPC

For many HPC workloads there is time to move a data set from cost-effective, high capacity storage to high performance storage. If that data movement is simple and quick enough, the organization can realize significant cost savings by moving data not currently under modeling or analysis to a scale-out, object-based storage tier. Some object storage systems can add power-saving capabilities, spinning down hard disk drives and even powering down nodes within the storage cluster.

An object-based capacity tier also allows the primary HPC storage tier to be more flexible. Since it only has to handle a much smaller, active data set, the primary tier could use high performance scale-out all-flash NAS or block storage that effectively acts as a cache in front of the back-end object storage.

Creating a Bridge

The key is to create a bridge between the high performance HPC storage tier and its capacity tier. This means the organization either needs to manage the movement between tiers manually, invest in software that manages data movement between them, or look for a storage system that can automatically move data between the two tiers. The effectiveness of this bridge is critical to the HPC environment seeing maximum benefit from the capacity tier, and it enables the organization to keep the investment in the premium flash tier to a minimum.

Dive Deep on HPC Storage Infrastructure Design

Watch our on-demand, HPC-focused webinar with Storage Switzerland and Caringo. We dive deep on HPC storage infrastructure design, discuss the pros and cons of the various options in HPC storage infrastructure, and provide key steps in developing a strategy that makes sense, not only for HPC but for the rest of the enterprise as well.

Watch On Demand

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a highly sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
