The Problems with Storage Performance Benchmarks

There are a number of storage performance benchmarks that provide a standardized way to compare storage systems from various vendors. The problem is that few, if any, data centers have a workload that looks exactly like those benchmarks. Every data center is unique, with its own combination of application workloads, and that combination places unique requirements on its storage infrastructure.

The Money Problem

Most benchmarks are about speed: how many IOPS, transactions, or files a system can process in a given time period. The competing vendors put together configurations that no organization could afford and that no IT professional would deploy. While some benchmarks require that the vendor clearly state the configuration and the cost of the system, this information does not relate to how organizations actually budget and plan for storage system purchases.

Most storage purchases are based on a pre-assigned budget. IT professionals need to determine how much performance they can get for a fixed number of dollars. That’s why when RFQs (requests for quotation) go out, they don’t ask how many IOPS the system can provide regardless of cost. They set a price: what can the system deliver for, say, a $200,000 budget?
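The budget-first selection process described above can be sketched as a simple comparison. The vendor names, costs, and IOPS figures below are purely illustrative assumptions, not real quotes or benchmark results:

```python
# Hypothetical comparison of candidate arrays under a fixed budget.
# All names and figures are illustrative, not actual vendor data.
BUDGET = 200_000  # dollars, as in the RFQ example above

candidates = {
    "Array A": {"cost": 180_000, "iops": 400_000},
    "Array B": {"cost": 195_000, "iops": 550_000},
    "Array C": {"cost": 240_000, "iops": 700_000},  # fastest, but over budget
}

# Keep only the systems the budget can cover, then rank by delivered IOPS.
affordable = {name: c for name, c in candidates.items() if c["cost"] <= BUDGET}
best = max(affordable, key=lambda name: affordable[name]["iops"])
print(best)  # Array B: the most IOPS deliverable within the $200,000 budget
```

Note that the raw-speed winner (Array C) is eliminated first; the question is never "how fast?" in isolation, but "how fast within the price we set?"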

How Concerned Should I Be About Performance Today?

Another problem with basing a decision on benchmark data is that the data isn’t relevant to the data center’s problem. A mid-range all-flash array will, for the overwhelming majority of data centers, likely eliminate all of their performance problems – for now. Storage Switzerland calls this the performance bubble. Like all bubbles, it will burst as more workloads are added to the storage system and as developers start to take advantage of an all-flash infrastructure.

IT planners need to know when their environment, running their workloads, will burst that bubble. They need to be able to turn dials and see the impact of increased users, transactions, or workloads on the storage system. They need to know their performance limits today to avoid business-impacting surprises down the road.

How To Eliminate Benchmarks?

The problem with eliminating benchmarks as a way to decide which storage system to buy is that IT needs to replace them with something else. Few organizations can afford a test lab that is roughly equivalent to production so they can test new storage systems against their own workloads. There is the sheer cost of that kind of test environment, and there is the time it takes to build it and run the workloads on it. Testing becomes a full-time job, which, again, most organizations can’t afford.

An alternative is workload modeling. In this scenario, the IO profile of the workloads is captured over a period of time (hours, days, weeks) and then “played” against the storage systems. Instead of a test lab, the data center only needs a workload generation appliance, which stands in for a number of high-performance servers running the workload model, plus the storage system being tested.

Workload modeling makes testing a simple process that can exercise hardware solutions on an almost continual basis. It should also provide “knobs” that allow the workload to be “turned up” to simulate increased IOPS, workload changes, or user growth.
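The capture-and-replay idea with a scaling knob can be sketched in a few lines. The profile fields, IOPS figures, and scaling approach here are assumptions for illustration only, not how any particular workload generation appliance actually works:

```python
# Minimal sketch of a workload model with a "knob" to scale intensity.
# Profile fields and numbers are illustrative assumptions.

def capture_profile():
    """Stand-in for a captured IO profile: a mix of operation types,
    block sizes, and their observed proportions over the capture window."""
    return [
        {"op": "read",  "block_kb": 8,  "weight": 0.7},
        {"op": "write", "block_kb": 64, "weight": 0.3},
    ]

def replay(profile, base_iops, scale=1.0, seconds=10):
    """'Play' the profile at base_iops * scale for a number of seconds,
    returning the total IOs that would be issued per operation type."""
    target = int(base_iops * scale)
    totals = {"read": 0, "write": 0}
    for _ in range(seconds):
        for entry in profile:
            totals[entry["op"]] += int(target * entry["weight"])
    return totals

profile = capture_profile()
today = replay(profile, base_iops=50_000)             # current load
doubled = replay(profile, base_iops=50_000, scale=2)  # "turn up" the knob
print(today, doubled)
```

Re-running the same captured profile at `scale=2` or `scale=3` is how a planner can estimate when growth will burst the performance bubble, without rebuilding the production environment in a lab.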

StorageSwiss Take

Benchmarks are interesting, but when it comes to evaluating storage systems for a specific application or set of applications, they are not of much real value. What matters is how much better the organization’s workloads will perform on the new storage system compared to the current one. Test labs are expensive to buy and maintain, and they require too much IT administration time to keep current.

Workload modeling, on the other hand, is a cost-effective way not only to test new systems but to continuously test current systems to understand where their limits are and what impact new workloads will have on them.

To learn more about workload modeling, check out our on demand webinar, “5 Steps To The Perfect Storage Refresh”.


George Crump is the Chief Product Strategist at StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.

