In a recent Storage Switzerland report (commissioned by Brocade and now available for download), we point out that there is a gap between high-performance compute and high-performance storage: the storage network. Workloads like server and desktop virtualization, high-frequency trading, and scale-up mission-critical databases are demanding more I/O than ever. The storage media, thanks to flash, is ready to respond. Gen 5 Fibre Channel is also ready. The question is, are traditional IP storage networks running NFS and iSCSI ready to handle this performance demand?
Most IP networks will upgrade to 10GbE eventually. In fact, chances are they will do so sooner than most 8Gbps Fibre Channel networks, partly because 10GbE connectivity has become so inexpensive, and partly because many IP-based storage networks are built by bonding or trunking multiple 1GbE ports, which makes the need to upgrade more pressing. But customers looking to move to 10GbE, and even those that have already done so, need to consider whether 10GbE is fast enough for flash-based storage, and especially for all-flash arrays.
The Market Issues
From a market perspective, the overwhelming majority of all-flash systems are being connected to Fibre Channel. Most vendors won't publicly report their numbers, but in a chalk talk video that we did with Violin Memory, they stated that 90% of their systems connect to Fibre Channel environments. In fact, that video summarized a recent test we did with Violin to verify a benchmark result of more than 2 million IOPS, and they specifically chose Gen 5 Fibre Channel to get the job done.
In fairness, some of these vendors, like IBM (formerly Texas Memory Systems), don't have an IP/iSCSI option at all (edit: we originally included Pure Storage here, but Pure does in fact offer iSCSI). But there is a reason for that. These vendors chose Fibre Channel because they knew they needed a predictable, high-performance infrastructure for connectivity. Gen 5 Fibre Channel, thanks to its 16Gbps bandwidth and lossless architecture, provides that.
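To put that bandwidth claim in rough perspective, here is a minimal back-of-the-envelope sketch (in Python) comparing the three link types by line rate and encoding efficiency. The rates and encodings are the commonly published ones; real usable throughput runs somewhat lower once framing overhead is counted (16GFC is usually quoted at about 1600 MB/s, for example).

    # Back-of-the-envelope throughput comparison. Line rates are in Gbaud;
    # encoding efficiency reflects 8b/10b vs. 64b/66b. Figures are the
    # commonly published ones and ignore framing overhead.
    links = {
        "8Gbps Fibre Channel": (8.5, 8 / 10),       # 8b/10b encoding
        "10GbE":               (10.3125, 64 / 66),  # 64b/66b encoding
        "Gen 5 (16Gbps) FC":   (14.025, 64 / 66),   # 64b/66b encoding
    }

    for name, (gbaud, efficiency) in links.items():
        mb_per_s = gbaud * efficiency * 1000 / 8  # usable MB/s per direction
        print(f"{name:20s} ~{mb_per_s:5.0f} MB/s per direction")

The takeaway is simply that Gen 5 Fibre Channel's usable rate is roughly a third higher than 10GbE's, before any protocol overhead enters the picture.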
The IP Efficiency Issues
There are some technological reasons for not choosing IP as well, some of which we point out in the report. The primary issue is that IP is not a lossless architecture. When congestion causes packets to be dropped, TCP has to retransmit them, which makes it hard for IP to deliver predictable performance in highly performance-sensitive environments. As we discuss in the report, there are also protocol efficiency concerns with NFS and iSCSI, since they have to manage TCP traffic and perform various protocol conversions. In a hard-drive-based storage world, where media latency dominated, these inefficiencies were largely masked. In a memory-based storage infrastructure, however, they can become a top concern.
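To see why retransmissions hurt flash so much more than disk, consider a deliberately simplistic model: assume a dropped I/O pays one TCP retransmission timeout (Linux commonly enforces a 200ms minimum RTO) on top of its normal service time. The latency figures below are assumed round numbers, not measurements:

    # Simplistic latency model: a dropped I/O pays one TCP retransmission
    # timeout (RTO) on top of its normal service time. Real TCP behavior
    # (fast retransmit, back-off) is more complex; numbers are assumptions.
    FLASH_US = 200      # assumed flash array I/O latency, microseconds
    DISK_US = 5_000     # assumed hard-drive I/O latency, microseconds
    RTO_US = 200_000    # common Linux minimum TCP RTO (200 ms)

    def avg_latency_us(base_us, drop_rate):
        """Expected latency when a fraction drop_rate of I/Os hit one RTO."""
        return base_us + drop_rate * RTO_US

    for drop_rate in (0.0, 0.0001, 0.001):
        flash = avg_latency_us(FLASH_US, drop_rate)
        disk = avg_latency_us(DISK_US, drop_rate)
        print(f"drop {drop_rate:.2%}: flash {flash:7.0f} us "
              f"({flash / FLASH_US:.1f}x), disk {disk:7.0f} us "
              f"({disk / DISK_US:.2f}x)")

At a 0.1% drop rate this toy model doubles the flash array's average latency while adding only about 4% to the hard drive's. That is the predictability argument in quantitative form.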
Another issue is scalability. As an IP network grows, the issues associated with network layers and Spanning Tree Protocol (STP) become a challenge. STP is an aging network standard created when Ethernet networks were built from simple hubs and bridges rather than the switches they use today. Its purpose is to ensure that there are no loops in the network; without it, frames could circulate endlessly and effectively hang the network. To do this, STP guarantees a single active path to each network device by shutting down any alternative paths.
The problem is that this loop-free design leaves redundant links idle, wasting roughly 50% or more of the available network bandwidth in a typical redundant topology. With very inexpensive 1GbE networks that was less of a concern, but with the emergence of 10GbE, every deactivated link represents 10Gbps of paid-for bandwidth sitting unused, which is far more significant and costly.
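The arithmetic behind that waste is straightforward. The sketch below assumes a hypothetical topology of 20 access switches, each with two 10GbE uplinks, of which STP blocks one:

    # Hypothetical topology: 20 access switches, each with two 10GbE uplinks.
    # STP forwards on one uplink per switch and blocks the other.
    SWITCHES = 20
    UPLINKS_PER_SWITCH = 2
    LINK_GBPS = 10

    provisioned = SWITCHES * UPLINKS_PER_SWITCH * LINK_GBPS
    blocked = SWITCHES * (UPLINKS_PER_SWITCH - 1) * LINK_GBPS
    print(f"Provisioned uplink bandwidth: {provisioned} Gbps")
    print(f"Blocked by STP: {blocked} Gbps ({blocked / provisioned:.0%})")

Every blocked link is bandwidth that was purchased but can never carry storage traffic.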
Ethernet Networks Have A Role in Storage
Just like any other technological shortcoming, the issues with Ethernet can be worked around. TCP and translation overhead can be offloaded to specialized network cards with processors designed to take on that load. Ethernet fabric technology leveraging TRILL (Transparent Interconnection of Lots of Links), which replaces STP with multipath forwarding, is beginning to take hold in larger data centers that are fully committed to IP. There is also Fibre Channel over Ethernet (FCoE), which provides a Fibre Channel-like experience on Ethernet topologies. Lastly, there is hope in some of the software-defined networking solutions that integrate directly with the storage system to allow for more optimal traffic handling.
For mid-sized data centers, the performance concerns caused by IP overhead and the scalability issues caused by STP may never become pressing challenges. For those data centers, the cost advantage of basic 10GbE switches and cards, as well as the native iSCSI storage systems targeted at these markets, is very compelling.
The evidence from both vendors and the protocol installed base indicates an overwhelming preference for Fibre Channel when connecting all-flash and flash-assisted arrays. The capabilities of Gen 5 Fibre Channel, a fabric-based, lossless network that delivers 16Gbps of bandwidth, are simply too compelling. As architectures become increasingly memory-based, the drawbacks of Ethernet may be too great, and the workarounds too expensive, especially at scale, to be acceptable to large enterprise data centers. These data centers will need to shift to Fibre Channel or consider another workaround, like server-side flash, which will be the subject of an upcoming column.
Download the complete report here:
Brocade is a client of Storage Switzerland.