The Performance Realities Facing Deep Flash

One of the more interesting developments in the flash market is the introduction of extremely high-density flash systems that can store multiple petabytes in a few rack units. Once you factor in their power and floor space savings, these systems are now well within the price range of even capacity-centric hard disk arrays. But although these systems are flash based, often using the same flash as high-performance flash arrays, they typically offer lower performance.

Comparing Performance Flash to Capacity Flash

A common configuration from an all-flash vendor is a high-performance system with dual controllers and four network connections. The configuration for the high-capacity system is the same (two controllers and four network connections), but it manages as much as 10X the capacity. Essentially, the capacity system is oversubscribed: assuming full utilization, it has too much capacity for the processing power and network connections behind it. A high-performance system is likely undersubscribed: it has more than enough processing power and network connections for the capacity it manages.
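To make the oversubscription math concrete, here is a minimal sketch. The controller and port counts come from the paragraph above; the 100TB and 1PB capacities are illustrative round numbers (the same figures reused in the sizing example later in this article), not vendor specifications.

```python
# Illustrative sketch of the oversubscription comparison described above.
# Controller and port counts come from the article; the absolute
# capacities are hypothetical round numbers chosen for clarity.

def capacity_per_resource(capacity_tb, controllers, ports):
    """Return capacity managed per controller and per network port."""
    return capacity_tb / controllers, capacity_tb / ports

# High-performance array: 100TB behind 2 controllers and 4 ports.
perf_ctrl, perf_port = capacity_per_resource(100, 2, 4)
# High-capacity array: 10X the capacity behind the same hardware.
cap_ctrl, cap_port = capacity_per_resource(1000, 2, 4)

print(f"Performance system: {perf_ctrl:.0f} TB/controller, {perf_port:.0f} TB/port")
print(f"Capacity system:    {cap_ctrl:.0f} TB/controller, {cap_port:.0f} TB/port")
print(f"Oversubscription ratio: {cap_ctrl / perf_ctrl:.0f}X")
```

The ratio of capacity to controllers and ports is the whole story here: the capacity system carries ten times the data per unit of processing power and network bandwidth.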

For example, a database running by itself will perform almost identically on either system, even if that database is handling thousands of accesses. Why? Database I/O requests are relatively small. While both systems will typically support more than just one database, there is a very practical limit, imposed by capacity, to how many workloads can be placed on the high-performance system. The high-capacity system has a much higher limit, if any.

Assuming an average database size of 500GB, a 100TB high-performance system could support only 200 instances, while a 1PB high-capacity system could support 2,000. But remember, both systems have essentially the same CPU and network bandwidth. If both sets of databases were equally active, the deep flash system would collapse under the load.
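As a rough illustration of what that means per workload, the sketch below divides an assumed aggregate of network bandwidth across those instance counts. The 500GB database size and system capacities come from the paragraph above; the four 32Gb/s ports are a hypothetical figure, not a number from the article.

```python
# A minimal sketch of the sizing math above. The instance counts follow
# from the article's figures; the aggregate port bandwidth (four 32Gb/s
# ports) is an assumed value for illustration only.

TB = 1000  # GB per TB, decimal, as storage capacities are usually quoted

def max_instances(system_capacity_tb, avg_db_size_gb=500):
    """How many average-sized databases fit on a system of this capacity."""
    return (system_capacity_tb * TB) // avg_db_size_gb

total_bandwidth_gbps = 4 * 32  # four ports at 32Gb/s each (assumed)

for name, capacity_tb in [("100TB performance system", 100),
                          ("1PB capacity system", 1000)]:
    instances = max_instances(capacity_tb)
    share = total_bandwidth_gbps / instances
    print(f"{name}: {instances} databases, "
          f"{share:.2f}Gb/s each if all are active at once")
```

Under these assumptions, each of the 200 databases on the performance system gets ten times the bandwidth share of each of the 2,000 on the capacity system, which is why a fully active deep flash system buckles.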

Use the Right Flash for the Right Job

In reality, no one would ever run 2,000 databases on the high-capacity system; it is not designed for that. These systems are designed to store mostly unstructured data, accessed by a finite number of users or servers. That said, a high-capacity system could do that job and still deliver more than adequate performance to a few databases or a small virtualized environment. Beyond that, a high-performance flash system is the way to go.

It really comes down to choosing the right system for the right job: high-performance systems for high-performance databases, and high-capacity flash for big data, analytics and large unstructured datasets. There is some potential for crossover, especially with high-capacity flash arrays, but you have to have enough initial capacity to justify the move to high capacity (most have a minimum capacity of 500TB).

To learn more about high-performance and high-capacity flash, watch our on-demand webinar “The Bifurcation of the Flash Market”.

