The Million IOPS Data Center

Storage Switzerland recently completed a lab audit of a test performed by Brocade, Emulex and Violin Memory that achieved over two million IOPS. The report is available now, and you can get a copy of it by attending our webinar, “How To Design a 2 Million IOPS Storage Infrastructure”. The most common question we have received since previewing some of the results of that audit is “Who needs that kind of performance?” That’s an interesting question, and the answer may surprise you. Specific applications need millions of IOPS right now, and many data centers will in the coming years.

The Million IOPS Applications

What applications can benefit from millions of IOPS right now? A high frequency trading (HFT) environment probably tops the list. HFTs are fast-paced, high-volume financial trading applications that require powerful compute workstations running complex programs. These programs analyze a variety of real-time data sources and then run sophisticated algorithms against those sources to make massive financial trading decisions.

These environments require storage systems that have the bandwidth to ingest massive amounts of information very quickly and the IOPS to execute millions of read/write requests. For the HFT application owner, every investment made in storage performance leads to increased trader accuracy and productivity. “Wins” in this market are usually accomplished by being a few milliseconds, or even microseconds, faster than the competition.

In addition to HFT environments, there are also many data centers that need to support scale-up databases. These are typically social media, gaming and other online applications that have to handle high volumes of small-packet transactions against a backend data store. Thanks to the proliferation of mobile devices, the storage infrastructures that support these applications must be able to respond rapidly.

Without fast storage I/O, application designers need to add increasingly complex sharding logic to their code to distribute database functions across multiple servers and storage systems. The online application market is highly competitive, with users who will not tolerate slow response times. A high-performance backend storage infrastructure keeps application fragmentation to a minimum and delivers a consistent experience to its users.
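To illustrate what that sharding logic typically looks like, here is a minimal sketch of hash-based key routing. All names and shard counts here are hypothetical, not drawn from any particular application:

```python
# Minimal sketch of hash-based sharding: route each record key to one of
# several database servers so reads and writes spread across machines.
# Server names and shard count are illustrative assumptions.
import hashlib

SHARDS = ["db-server-1", "db-server-2", "db-server-3", "db-server-4"]

def shard_for(key: str) -> str:
    """Hash the key so the same key always lands on the same server."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user-42"))  # always routes "user-42" to the same shard
```

Even this toy version hints at the complexity the article warns about: adding or removing a server remaps almost every key, which is why real deployments layer on techniques like consistent hashing, and why fast shared storage that avoids sharding altogether is attractive.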

The Million IOPS Data Center

For organizations that don’t have an HFT need or a scale-up database demand, the results of testing that delivers millions of IOPS remain important. Where IOPS used to be measured in application silos, virtualized servers and desktops have broken those silos down. IOPS now need to be measured as the sum total of the data center’s demand.
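The shift from per-silo to whole-data-center measurement is simple arithmetic. As a sketch, with made-up per-workload peak figures (the numbers below are assumptions for illustration only):

```python
# Hypothetical peak IOPS per workload, for illustration only.
workloads = {
    "oltp_database": 150_000,
    "email": 40_000,
    "virtual_desktops": 500 * 30,   # 500 desktops x ~30 IOPS each
    "analytics": 80_000,
    "file_services": 25_000,
}

# In silos, each array only had to meet its own number. Consolidated on
# shared storage, the design target is the sum of simultaneous peaks.
total = sum(workloads.values())
print(f"Aggregate peak demand: {total:,} IOPS")  # -> 310,000 IOPS
```

Individually, none of these workloads looks extreme; it is the consolidation onto one shared infrastructure that pushes the aggregate toward the million-IOPS range.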

In a virtualized environment, a single host may be running dozens of performance-demanding applications or thousands of desktops. Thanks to VM mobility capabilities, those applications or desktops can move to any connected host; VM mobility also drives the need for shared storage. The combination of dozens of applications per server and the mobility of those applications creates the need for a consistently high performance storage infrastructure.

“Winning” in virtualized servers or desktops is all about density: the more VMs you can place on a host, the better your ROI. Intel clearly delivers the raw CPU compute needed to extend VM density, but the more VMs placed on a host, the more random the workload becomes. In addition, each host drives I/O at the storage target more constantly. With dozens of applications per host there is no I/O ‘quiet time’; if one application is not accessing storage, others are. The infrastructure has to be designed for constant I/O.
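The “no quiet time” point can be made concrete with a rough probability estimate. The 90% per-VM idle figure below is an assumption for illustration, and the VMs are treated as independent:

```python
# If each VM on a host is idle (issuing no I/O) 90% of the time,
# independently, the chance that ALL VMs are quiet at once shrinks
# fast as VM count grows.
idle_fraction = 0.90  # assumed per-VM idle probability

for vms in (1, 10, 30, 60):
    all_quiet = idle_fraction ** vms
    print(f"{vms:3d} VMs: storage sees no I/O {all_quiet:.1%} of the time")
```

With one VM the storage target is quiet 90% of the time; with 30 VMs it is quiet only about 4% of the time, and with 60 VMs essentially never. That is why consolidated hosts demand storage designed for continuous, random I/O.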

The more virtualized our data centers become the fewer physical hosts there will be, but those hosts will be generating as much, or more, I/O than ever. The storage system will become the gating factor in how much ROI can be achieved from the virtualization effort. An infrastructure that can scale to deliver millions of IOPS allows for maximum ROI generation.

Storage Swiss Take

A few years ago, hundreds of thousands of IOPS seemed extreme; now it’s increasingly commonplace. Today, millions of IOPS are needed by specific applications. But as virtualization breaks down application silos and consolidates resources, high IOPS will be needed by many more organizations, since the IOPS load for the entire data center may be centralized on a single storage architecture. It is important that IT planners begin to design storage infrastructures that can affordably address today’s performance challenge yet scale to meet the IOPS demand of the not-too-distant future.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
