Is 10GbE Fast Enough For Flash Storage?

In a recent Storage Switzerland report (commissioned by Brocade and now available for download), we point out that there is a gap between high performance compute and high performance storage: the storage network. Workloads like server and desktop virtualization, high frequency trading and scale-up mission critical databases are demanding more I/O than ever. The storage media, thanks to flash, is ready to respond. Gen 5 Fibre Channel is also ready. The question is, are traditional IP storage networks running NFS and iSCSI ready to handle this performance demand?

Most IP networks will upgrade to 10GbE eventually, and chances are they will do so sooner than most 8Gbps Fibre Channel networks move to 16Gbps. Part of the reason is how inexpensive 10GbE connectivity has become. The other reason is that many IP-based storage networks are built by bonding or trunking multiple 1GbE ports, which makes the need to upgrade even more pressing. But customers looking to upgrade to 10GbE, and even those that have already done so, need to consider whether 10GbE is fast enough for flash-based storage, and especially for all-flash arrays.
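To put the raw numbers in perspective, the short Python sketch below estimates how many 4KB IOPS each class of link can carry at wire speed and compares that to a hypothetical all-flash array. The array rating, block size and nominal link rates are illustrative assumptions, not benchmark data.

    # Back-of-the-envelope: how many 4 KB IOPS can each link carry at wire speed?
    # All figures below are illustrative assumptions, not measurements.

    LINKS_GBPS = {
        "1GbE": 1,
        "4 x 1GbE trunk": 4,      # aggregate only; a single flow is still capped at 1GbE
        "10GbE": 10,
        "16Gbps Gen 5 FC": 16,    # nominal line rate, ignoring encoding overhead
    }

    BLOCK_BYTES = 4 * 1024        # assumed 4 KB I/O size
    ARRAY_IOPS = 500_000          # assumed capability of a modest all-flash array

    for name, gbps in LINKS_GBPS.items():
        wire_iops = (gbps * 1e9 / 8) / BLOCK_BYTES
        pct = 100 * wire_iops / ARRAY_IOPS
        print(f"{name:>16}: ~{wire_iops:,.0f} IOPS at wire speed "
              f"(~{pct:.0f}% of a {ARRAY_IOPS:,} IOPS array)")

Even before protocol overhead is counted, a single 10GbE port tops out at roughly 300,000 4KB IOPS, so one busy all-flash array can saturate it. A trunk of 1GbE links fares far worse, since any individual host-to-array flow is typically hashed onto a single member link.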

The Market Issues

From a market perspective, the overwhelming majority of all-flash systems are being connected to Fibre Channel. Most vendors won't publicly report their numbers, but in a chalk talk video that we did with Violin Memory Systems, they stated that 90% of their systems are connecting to Fibre Channel environments. In fact, that video summarized a recent test we did with Violin to verify a benchmark result of more than 2 million IOPS, and they specifically chose Gen 5 Fibre Channel to get the job done.

In fairness, some of these vendors, like Pure Storage (edit: Pure actually does offer iSCSI) and IBM (formerly Texas Memory), don't have an IP/iSCSI option. But there is a reason for that. These vendors chose Fibre Channel because they knew they needed a predictable, high performance infrastructure for connectivity. Gen 5 Fibre Channel, thanks to its 16Gbps bandwidth and lossless architecture, provides that.

The IP Efficiency Issues

There are some technological reasons for not choosing IP as well, some of which we point out in the report. The primary issue is that IP is not a lossless architecture. Because dropped packets force TCP retransmissions, it is hard for IP to provide predictable performance in highly performance-sensitive environments. As we discuss in the report, there are also some protocol efficiency concerns with NFS and iSCSI, since they have to manage TCP traffic and perform various protocol conversions. In a hard drive based storage world, where media latency dominated, these inefficiencies were largely masked. In a memory based storage infrastructure, however, they can become a top concern.
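A quick illustration of why this matters more with flash: if a disk I/O takes around 5 ms and a flash I/O around 100 microseconds, the same fixed chunk of network and protocol time is noise in the first case and the dominant cost in the second. The latency figures in the sketch below are assumed round numbers, not measurements from the report.

    # Why protocol overhead is masked by disk but exposed by flash.
    # All latency figures are assumed round numbers for illustration.

    scenarios = {
        "Disk array, lean transport":  (5000, 50),   # 5 ms media, 50 us network/protocol
        "Disk array, busy IP stack":   (5000, 250),  # extra protocol work / retransmits
        "Flash array, lean transport": (100, 50),    # 100 us flash media
        "Flash array, busy IP stack":  (100, 250),
    }

    for name, (media_us, net_us) in scenarios.items():
        total_us = media_us + net_us                 # one I/O at queue depth 1
        share = 100 * net_us / total_us
        print(f"{name:>28}: {total_us:>5} us per I/O, "
              f"{share:.0f}% of it spent in the network and protocol stack")

On disk, even a heavy protocol penalty is a few percent of total response time; on flash it can easily be the majority, which is why the same Ethernet inefficiencies that went unnoticed for years suddenly show up in application latency.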

Another issue is scalability. As an IP network grows, the issues associated with network layers and Spanning Tree Protocol (STP) become a challenge. STP is an aging network standard created when networks were connected via simple hubs instead of the switches Ethernet networks use today. The purpose of STP is to make sure that there are no loops in the network, which would let traffic circulate endlessly and eventually bring the network down. To do this, STP makes sure that there is only a single active path to each network device, shutting down any alternative paths.

The problem is that this loop-free design leaves redundant links idle, wasting about 50% or more of the available network bandwidth. With very inexpensive 1GbE networks that was less of a concern, but with the emergence of 10GbE, each deactivated link represents far more wasted bandwidth, and far more wasted cost.
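A minimal sketch of that waste, assuming an access switch with two redundant 10GbE uplinks (the link counts and speeds are illustrative, and real topologies vary):

    # Usable uplink bandwidth with STP blocking redundant links versus a
    # fabric/ECMP design that forwards on all of them. Illustrative numbers only.

    UPLINKS = 2          # redundant 10GbE uplinks from an access switch
    LINK_GBPS = 10

    installed = UPLINKS * LINK_GBPS
    stp_usable = 1 * LINK_GBPS          # STP keeps one active path, blocks the rest
    fabric_usable = installed           # all links forwarding

    idle = installed - stp_usable
    print(f"Installed uplink bandwidth : {installed} Gbps")
    print(f"Usable with STP            : {stp_usable} Gbps")
    print(f"Usable with a fabric/ECMP  : {fabric_usable} Gbps")
    print(f"Idle under STP             : {idle} Gbps ({100 * idle / installed:.0f}%)")

With two uplinks, that idle capacity is the 50% figure cited above; add more redundant paths and the blocked share only grows, and at 10GbE prices each blocked link is a much larger investment sitting dark than it was at 1GbE.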

Ethernet Networks Have A Role in Storage

Just like any other technological shortcoming, the issues with Ethernet can be worked around. TCP and translation overhead can be offloaded to specialized network cards with processors designed to take on this load. Ethernet fabric technology leveraging TRILL (Transparent Interconnection of Lots of Links) is beginning to take hold in larger data centers that are fully committed to IP. There is also Fibre Channel over Ethernet (FCoE), which provides a Fibre Channel-like experience on Ethernet topologies. Lastly, there is hope in some of the software defined networking solutions that integrate directly into the storage system to allow for more optimal traffic handling.

For mid-sized data centers, the performance concerns caused by IP overhead and the scalability issues caused by STP may never become significant challenges. For those data centers, the cost advantage of basic 10GbE switches and cards, as well as the native iSCSI storage systems targeted at these markets, is very compelling.

Conclusion

The evidence from both vendors and the protocol installed base indicates an overwhelming preference for Fibre Channel when connecting all-flash and flash-assisted arrays. The capabilities of Gen 5 Fibre Channel, a fabric-based, lossless network that can deliver 16Gbps of bandwidth, are simply too compelling. As architectures become increasingly memory based, the concerns with Ethernet may be too great, and the workarounds too expensive, especially at scale, to be considered by large enterprise data centers. These data centers will need to shift to Fibre Channel or consider another workaround, like server-side flash, which will be the subject of an upcoming column.

Download the complete Report Here:

Brocade is a client of Storage Switzerland

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

5 comments on "Is 10GbE Fast Enough For Flash Storage?"
  1. Concerns about STP really don’t apply in modern large scale networks. Most of the large scale datacenter networks now use layer 3 with ECMP which allows full use of bandwidth without the problems of STP. If you need to run layer 2 for some apps, that can be done by a VXLAN overlay. For holdouts that just want a flat layer 2 network and traditional VLANs, MLAGs can be used to have full bandwidth uplinks without STP issues. So network bandwidth clearly isn’t a limitation of Ethernet.

    Also, your information about Pure is incorrect – they do support 10Gb. In fact, it's the highest bandwidth option they support, given that they only support 8Gb FC right now 🙂

    Concerns about “protocol efficiency” and “packet loss” are largely overblown for well designed 10Gb networks, although technologies like DCB can help if you’re really concerned.

    FC has a huge installed base that is hard to ignore, and there are plenty of cultural and compatibility reasons it is maintained. However any technical arguments to support it as “superior” to Ethernet are quickly disappearing and few companies without a legacy FC base are moving to it.

    To answer your original question… the largest, fastest flash storage system on the market runs on 10Gb Ethernet today. So yes, I’d say it is fast enough.

  2. George Crump says:

    Dave, first, thanks for the professional and well thought out response. Fair point about Pure Storage; simply a mistake on my part, and I've corrected the entry. As for Layer 2, I'd say a lot of the IP based storage networks in production today are using Layer 2. That is anecdotal, but I have not seen any published information to the contrary. As for MLAGs, again a fair point, but I don't see those in production much, and to me this begins to take away from the "cheap and easy" advantage that Ethernet brags about. Again, I am not saying that Ethernet is not a viable storage protocol; I am saying that in large enterprises it has complications that, in my opinion, are more easily handled with Fibre.

  3. Technical complexity exists on both sides. The biggest “complication” for Ethernet storage is usually that the Ethernet network is owned by a different team than the storage team. Enterprise IT silos are breaking down quickly, but this will likely be one of the last to fall.

  4. George Crump says:

    Agreed 115%! In fact, that is the subject of my next blog. The point is fair that it can be done with Ethernet/IP, but once you design a serious storage network it takes both a vendor and a customer focused on managing it.

  5. John F. Kim says:

    George, nice blog. I agree that in many cases 10Gb Ethernet is not fast enough for flash arrays, especially as those arrays increase performance and capacity. But 16Gb FC is not the only option. You pointed out FCoE is available now at 10GbE, and I believe it will support 40GbE in the near future. A few flash arrays already support 40Gb Ethernet for iSCSI and/or NAS, plus many already support QDR 40Gb or FDR 56Gb InfiniBand. I also expect that flash arrays will soon add support for RDMA over Converged Ethernet (RoCE), allowing storage connections over lossless Ethernet with lower latency than 16Gb Fibre Channel.

    Disclosure: This is my personal opinion, but I work for a networking vendor that sells 40Gb Ethernet and 40/56Gb InfiniBand technology.
