The Modern HPC Storage Architecture

High Performance Computing (HPC) is a unique environment that places special demands on storage infrastructure. These environments typically have dozens, if not hundreds, of compute nodes, each generating its own sequential workload; when those streams converge on the shared storage supporting them, they interleave into what looks like random I/O. This randomized, sequential workload is exhausting the capabilities of legacy NAS architectures, leading HPC storage designers to seek an alternative. What is needed is a storage architecture that delivers high performance, scales to very large environments, and remains cost effective.

The Modern HPC Storage Architecture

HPC storage infrastructures are almost the exact opposite of the traditional SAN. Instead of using large, scale-up storage arrays, most HPC designs now incorporate a smaller number of file-serving storage nodes that support the large number of application compute nodes described above. These file-serving nodes are clustered to provide high-bandwidth access to a physical storage back end.

[Figure: HPC storage architecture with flexibility and high performance – a simple Lustre solution]

Each element in this design – the physical storage server hardware, the software that creates the cluster and enables file sharing, and the storage devices in those server nodes – needs to deliver extremely high performance. That performance has two components: random I/O performance, generally expressed in IOPS, and sequential I/O performance, measured as bandwidth.
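The two metrics are related through I/O size: bandwidth is roughly IOPS multiplied by the size of each transfer. A minimal back-of-the-envelope sketch in Python (all figures are illustrative assumptions, not measurements of any system) shows how the same storage can be IOPS-bound on small random requests and bandwidth-bound on large sequential ones:

```python
# Rough relationship between the two performance metrics:
# bandwidth (MB/s) = IOPS x I/O size (MB). Figures below are
# illustrative assumptions, not measurements from any vendor.

def bandwidth_mb_s(iops: int, io_size_kb: int) -> float:
    """Sequential bandwidth implied by an IOPS rate at a given I/O size."""
    return iops * io_size_kb / 1024

# A random workload of small blocks vs. a sequential stream of large ones.
print(bandwidth_mb_s(iops=50_000, io_size_kb=4))     # ~195 MB/s
print(bandwidth_mb_s(iops=2_000, io_size_kb=1024))   # 2000 MB/s
```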

An essential element, and a key point of focus for the HPC storage designer, is the file system used to create the cluster. As they modernize their storage architectures, HPC IT planners are trying to replace legacy, proprietary file systems with open file systems like XFS, GFS, MogileFS, Lustre and Gluster. The goal is a storage design that is less expensive, higher performing and more flexible.

Flexibility is important so that these environments can adopt technology advancements as soon as they become available. HPC implementations often leverage higher-throughput networking technologies long before the proprietary file systems and their vendors support them. Many, for example, have already integrated 40GbE and InfiniBand into their infrastructures even though most storage vendors don't yet have these technologies on their product road maps.

At the same time, it's important for these organizations not to be so focused on the selection of the open storage file system that they ignore the selection of the physical storage hardware. In the HPC environment, potentially more than in any other, the hardware really does matter.

Requirements of HPC Storage Hardware

Since the physical storage hardware is critical to the success of an HPC storage infrastructure, there are certain capabilities that HPC IT planners should look for when selecting a solution. Like the environment itself, these capabilities are unique when compared to those of the typical data center SAN or NAS.

Scalable, High IOPS

The first requirement is enough IOPS to handle each of the dozens of compute engines making I/O requests, plus the ability to expand that number as demand scales. In the mainstream data center this usually means flash storage, something that may not be an option in HPC environments because the capacity demand is also very high; too often, flash is simply not cost effective at that scale. Many of the techniques that all-flash system vendors use to drive down flash capacity requirements, like compression and deduplication, are not appropriate for HPC data sets, which tend to be highly unique and incompressible.
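A rough cost model makes the point. The sketch below uses assumed per-terabyte prices (hypothetical figures, not market quotes) to show how data reduction closes the flash-versus-disk gap for mainstream data but leaves it wide open on incompressible HPC data:

```python
# Illustrative cost comparison: why all-flash struggles in HPC when
# deduplication and compression buy nothing. All prices are assumptions
# for the sake of the example, not quoted market figures.

def effective_cost_per_tb(raw_cost_per_tb: float, data_reduction: float) -> float:
    """Cost per TB of stored data after compression/dedup savings."""
    return raw_cost_per_tb / data_reduction

flash_raw, disk_raw = 500.0, 50.0   # assumed $/TB raw

# Mainstream data: a 4:1 reduction ratio narrows the gap considerably.
print(effective_cost_per_tb(flash_raw, 4.0))  # 125.0
# HPC data: unique and incompressible -> ~1:1, the full gap remains.
print(effective_cost_per_tb(flash_raw, 1.0))  # 500.0
print(effective_cost_per_tb(disk_raw, 1.0))   # 50.0
```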

Despite this ‘capacity reality’ and the poor fit of an all-flash system, the HPC storage infrastructure still needs to provide high performance, delivering it instead from large quantities of higher-capacity hard drives running at 10K or 15K RPM. The good news is that, because of the capacity demands of HPC environments, a high-drive-count storage infrastructure can still use that capacity efficiently.

Flash does have a role to play in providing IOPS in the HPC environment, but the investment in premium flash storage has to be made intelligently because of the randomized sequential I/O problem described above. A caching implementation is the most cost-effective approach, but the caching algorithm has to be more intelligent than a simple first-in, first-out (FIFO) buffer; it should implement a read-ahead approach to handle the mixed I/O patterns of HPC. A tiering implementation of flash can also be effective if it can migrate data in real time to adapt to dynamic workloads.
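To make the distinction concrete, here is a minimal Python sketch of a read-ahead cache that assumes a simple per-stream sequential detector; it is an illustration of the idea, not any vendor's actual algorithm:

```python
from collections import OrderedDict

class ReadAheadCache:
    """Minimal sketch: an LRU-style cache that prefetches when it sees
    sequential access. Illustrative only -- real HPC caching algorithms
    are considerably more sophisticated."""

    def __init__(self, capacity: int, window: int = 8):
        self.capacity = capacity
        self.window = window          # how many blocks to prefetch ahead
        self.cache = OrderedDict()    # block_id -> data, in LRU order
        self.last_block = {}          # per-stream last block requested

    def read(self, stream_id: int, block: int, backend) -> bytes:
        # Detect a sequential run: this block follows the previous one.
        sequential = self.last_block.get(stream_id) == block - 1
        self.last_block[stream_id] = block

        if block not in self.cache:
            self.cache[block] = backend(block)   # fetch on miss
        data = self.cache[block]
        self.cache.move_to_end(block)            # LRU bookkeeping

        if sequential:
            # Prefetch the next `window` blocks before they are asked for,
            # keeping one node's sequential pattern sequential at the disk
            # despite interleaved requests from other nodes.
            for b in range(block + 1, block + 1 + self.window):
                if b not in self.cache:
                    self.cache[b] = backend(b)

        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict least recently used
        return data

# Usage with a stand-in backend that fabricates 4KB blocks:
cache = ReadAheadCache(capacity=64)
data = cache.read(stream_id=0, block=10, backend=lambda b: b"x" * 4096)
```

A FIFO buffer, by contrast, only ever holds blocks that have already been requested and evicts them in arrival order, so it never gets ahead of a sequential reader the way the prefetch loop above does.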

Scalable High Bandwidth

The other challenge for HPC environments is delivering high bandwidth for the sequential part of the workload. This can be handled by aggregating bandwidth across multiple storage systems. The connectivity between the HPC storage nodes that run the file system software mentioned above and the physical storage has to be simple, and it has to be inexpensive. But it should also be networkable, with enough bandwidth that all the nodes can reach all the storage.
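As a back-of-the-envelope model (all figures here are assumptions chosen for illustration, not measurements of any product), aggregate bandwidth scales with the number of storage nodes until the interconnect between compute and storage becomes the ceiling:

```python
# Bandwidth aggregation across storage nodes, in rough terms.
# Figures are assumptions chosen to illustrate the scaling behavior.

def aggregate_bandwidth_gb_s(nodes: int, per_node_gb_s: float,
                             fabric_limit_gb_s: float) -> float:
    """Deliverable bandwidth: the sum of node bandwidth, capped by
    what the interconnect between compute and storage can carry."""
    return min(nodes * per_node_gb_s, fabric_limit_gb_s)

# Four storage nodes at ~4 GB/s each behind a 40 GB/s fabric: the
# nodes, not the network, are the limit -- add nodes to add bandwidth.
print(aggregate_bandwidth_gb_s(4, 4.0, 40.0))   # 16.0
print(aggregate_bandwidth_gb_s(12, 4.0, 40.0))  # 40.0 -- fabric-bound
```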

The typical approach is to create a backend storage network (a SAN), most often using Fibre Channel (FC), but this adds a new level of complexity to the environment. The implementation, tuning and maintenance of that network can become a job in and of itself. An FC SAN also forgoes the advantages of more modern connectivity options like InfiniBand and high-speed Ethernet. FC has a role to play in HPC, but it should be an option, not a requirement.

The SAS Solution

Serial Attached SCSI (SAS) can be an ideal alternative for the storage architecture back end. It provides very high 12Gb/s per-lane performance without the overhead of IP or the cost and complexity of FC. It's also networkable. While SAS networking is not as sophisticated as FC or IP, it's often all that's needed for the majority of HPC environments, because the clustered file system manages most of the networking functions.

For high-performance applications, SAS-connected storage delivers substantial bandwidth. Each SAS port is made up of four lanes of 12Gb/s each, so a single wide port carries 48Gb/s, comfortably exceeding 16Gb FC and 10Gb iSCSI.
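The arithmetic behind that comparison is straightforward; the line rates below ignore protocol overhead and encoding for simplicity:

```python
# Per-port line-rate comparison. Raw signaling rates only; protocol
# overhead and encoding are ignored to keep the comparison simple.

sas_wide_port = 4 * 12     # 4 lanes x 12 Gb/s = 48 Gb/s per wide port
fc_16g        = 16         # a single 16Gb FC port
iscsi_10g     = 10         # a single 10Gb Ethernet/iSCSI port

print(f"SAS wide port: {sas_wide_port} Gb/s")   # 48 Gb/s
print(f"16Gb FC port:  {fc_16g} Gb/s")
print(f"10Gb iSCSI:    {iscsi_10g} Gb/s")
```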

The selection of SAS gives the HPC IT planner an environment that's almost as easy to support and manage as direct-attached storage, with the raw I/O capability to eliminate the need for network tuning. Most importantly, it's remarkably inexpensive compared to building an FC infrastructure. And if the environment ever needs to include FC or iSCSI, some storage systems can make that change without replacing the system itself.

The New Architecture for HPC

An ideal storage architecture would have multiple storage controller units filled with high-performance hard drives. Each controller unit would have its own CPU, capacity and cache. The clustered file system would be responsible for aggregating these controller units into a single pool of storage, essentially creating a compute infrastructure focused on storage I/O.

The storage connectivity should be SAS based, with each controller unit connecting directly into the storage nodes running the clustered file system. The AssuredSAN 4004 from Dot Hill is an excellent example of this kind of system. It provides a simple, native 12Gb/s interface to a common pool of storage for up to four hosts, allowing for extremely high bandwidth transfers and resolving much of the sequential storage access issue. Its combination of cache, dual active/active controllers and high drive count also ensures the rapid IOPS response needed for the random part of the workload.

These new architectures for HPC systems need to take a “software second” approach to design. While the software capabilities of a storage system are critical to the mainstream data center, they can often get in the way in an HPC infrastructure. Again, the clustered file system is going to provide most of the intelligence; having the storage system provide similar capabilities simply increases costs and potentially impacts performance. By not having to design and integrate a sophisticated software stack, providers of these types of storage systems can focus solely on performance and flexibility.

In a hardware-oriented environment like HPC, flexibility is a key feature for storage arrays that don't carry layers of services like virtualization. The ability to support multiple client interfaces and storage device types in different combinations lets HPC users ‘hard wire’ their storage arrays to match their current workloads. Then, when projects change, those assets can be reconfigured to support new workloads.

Systems like Dot Hill’s AssuredSAN 4004 provide this kind of flexibility by supporting multiple channels of 16Gb FC and 10Gb iSCSI, in addition to 12Gb SAS. These modular arrays can be populated with performance drives, capacity drives or SSDs – or a combination.

Dot Hill is a client of Storage Switzerland

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a highly sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of technologies such as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
