What is Extreme Performance?

The modern data center has to support many different types of workloads, each of which makes different demands on the storage architecture. Today, standard all-flash arrays (AFAs) are the mainstream storage systems for data centers, and conventional best practice is to place all active and near-active workloads on the AFA. However, there is a growing number of workloads where AFA performance is not enough, and in these situations organizations need to look to an extreme performance storage system that enables the workload to scale further, maximize hardware utilization, and satisfy the expectations of users.

Extreme performance workloads are no longer corner use cases required by only a few organizations. These workloads are also not limited to the so-called modern applications that drive analytics, artificial intelligence (AI) and machine learning (ML). They also include traditional workloads like online transaction processing (OLTP) database applications and real-time trading applications.

Each of these use cases, both modern and traditional, demands rapid access to typically large data sets. The modern use case may need to scan billions of small files stored in a file system, while the traditional workload may need to scan millions of records in a structured database. A traditional workload example might be an Oracle Database that is scale-up in nature. Carving this application up to spread out the processing and storage IO loads is complicated and leads to reliability issues. Ideally, the organization wants to buy enough compute, networking, and storage performance to enable the database to remain scale-up while still meeting the performance expectations of the enterprise.

Some modern workloads, such as file-based deep learning, are scale-out in nature and are designed to run the application across multiple nodes. When designing these architectures, IT planners must typically choose between storage directly attached to each node, which lowers network latency, and shared storage, which is more efficient but introduces network and storage system latency. The organization finds itself in a catch-22 of sorts; it must sacrifice either the efficiency of shared storage or the low latency of direct-attached storage.

Extreme Performance Architectures

Generally, there is more than enough compute and networking capability to meet performance demands for both modern and traditional application use cases. The problem is that traditional storage systems, even AFAs, can't deliver the consistently high performance and consistently low latency that these environments require. The inconsistent performance is a result of storage system vendors building on non-optimized commercial off-the-shelf (COTS) servers, which are not internally designed for high-performance storage. An extreme performance system is purpose-built for the task of delivering high performance and consistent low latency. Its internal connectivity is optimized for memory-based storage IO and adds almost no latency. These systems also have enough dedicated processing power to deliver the enterprise-class data services that organizations have come to count on. Finally, because they are built from the ground up for memory-based storage, extreme performance systems also often deliver extreme capacity, fitting petabytes of storage in a few rack units.

In most situations, extreme performance systems work alongside existing AFA systems; they don't replace them. In our next blog, Storage Switzerland will explore high-performance AFAs, their use cases, and how they can work with extreme performance solutions to deliver a cost-effective environment for most production workloads.

In the meantime, sign up for our on-demand webinar "Flash Storage – Deciding Between High Performance and EXTREME Performance." In this webinar we detail high-performance and extreme performance systems, the use cases they address, how to decide which workloads should go on which system type, and how IT planners should integrate them.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
