Storage, the Bump in the 16Gb Road
16 Gbps (Gen 5) Fibre Channel (FC) has been available for several years, and it is an ideal network choice for storage systems that count on flash to meet the performance demands of the data center. The problem is that most of these storage systems still run 8 Gbps FC, making it difficult to tap into the full potential of flash storage. The lack of Gen 5 on the storage array itself leads to a more complex storage network design and consumes more FC switch ports. A clean end-to-end connection provides a simpler, high-speed path from the host to the storage system, and promises to let data centers increase virtual machine density and improve overall storage response time.
Working Around The Storage Speed Bump
Prior to putting Gen 5 on the array, most arrays relied on multiple 8 Gbps FC cards to connect into the storage network, even if the rest of that network was already leveraging Gen 5 host bus adapters (HBAs) and switches. To keep up, multiple 8 Gbps cards in the hosts were trunked to aggregate bandwidth. While the FC protocol is highly efficient in its use of multiple links, trunking is not as efficient as a single card running at maximum performance. Additionally, the use of multiple 8 Gbps connections consumed twice as many switch ports. While this configuration was acceptable for many data centers, those looking to take full advantage of Gen 5 technology found it wanting, and were often forced to look to server-side technologies or back off on their plans for increasing the number of virtual machines per physical host.
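As a rough sketch of that trade-off, the arithmetic below compares two trunked 8 Gb ports to a single Gen 5 port. The per-port usable rates and the trunk-efficiency factor are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope comparison: two trunked 8GFC ports vs. one 16GFC port.
# Figures are illustrative assumptions, not measured or vendor-published numbers.

GFC8_MBPS = 800         # assumed usable MB/s per 8GFC port (8b/10b encoding)
GFC16_MBPS = 1600       # assumed usable MB/s per 16GFC port (64b/66b encoding)
TRUNK_EFFICIENCY = 0.9  # assumed loss from imperfect load balancing across links

def trunked_throughput(ports, per_port_mbps, efficiency):
    """Aggregate usable throughput of a multi-link trunk, in MB/s."""
    return ports * per_port_mbps * efficiency

trunk = trunked_throughput(2, GFC8_MBPS, TRUNK_EFFICIENCY)  # consumes 2 switch ports
single = GFC16_MBPS                                         # consumes 1 switch port

print(f"2 x 8GFC trunk : {trunk:.0f} MB/s over 2 switch ports")
print(f"1 x 16GFC link : {single:.0f} MB/s over 1 switch port")
```

Even with a generous efficiency assumption, the trunk delivers less usable bandwidth than the single Gen 5 link while consuming twice the switch ports.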
Learn more about achieving maximum VM density by watching our on-demand webinar, “Maximum VM Density Requires Optimal Storage Networking and Operational Transparency.”
The Value of End-to-End Gen 5
The value of an end-to-end Gen 5 design is realized when an all-flash or flash-heavy hybrid array is the storage endpoint. With an end-to-end configuration, the storage network can perform at such a level that the difference between local and networked flash performance will be unnoticeable to most data centers. That means these data centers can now enjoy the classic advantages of shared storage: efficient use of capacity; improved support for clustered environments like VMware, Oracle, and Hyper-V; and improved data availability and protection from storage-system-level snapshots and replication. The result is that environments can scale further “up” and/or “out,” allowing more virtual machines per host and more users per database host, greatly reducing data center costs and floor space requirements.
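To see why networked flash can feel local on Gen 5, a quick wire-time calculation helps. The latency and rate figures below are illustrative assumptions, not measurements:

```python
# Rough serialization math: how much wire time the FC link adds to a single
# 4 KiB flash read. All figures are illustrative assumptions.

FLASH_READ_US = 100.0   # assumed all-flash array read latency, in microseconds
BLOCK_BYTES = 4096      # one 4 KiB I/O

def wire_time_us(nbytes, usable_mb_per_s):
    """Microseconds to serialize nbytes onto a link at the given usable rate."""
    return nbytes / usable_mb_per_s  # bytes divided by MB/s yields microseconds

gen5_us = wire_time_us(BLOCK_BYTES, 1600)  # assumed ~1600 MB/s usable at 16GFC
gen4_us = wire_time_us(BLOCK_BYTES, 800)   # assumed ~800 MB/s usable at 8GFC

print(f"16GFC adds ~{gen5_us:.2f} us to a {FLASH_READ_US:.0f} us flash read "
      f"({gen5_us / FLASH_READ_US:.1%} overhead)")
print(f" 8GFC adds ~{gen4_us:.2f} us ({gen4_us / FLASH_READ_US:.1%} overhead)")
```

Under these assumptions, the link contributes only a few microseconds against a flash read measured in the hundreds of microseconds, which is why the network stops being the bottleneck.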
Available Now – End-to-End Gen 5 Networking
While Emulex and QLogic battle it out for host connectivity dominance, and Brocade and Cisco battle for switch supremacy, QLogic seems to be the company capturing the most storage array design wins. In recent months we’ve seen it rack up OEM contracts with SolidFire, Violin Memory, EMC VNX, NEC, and Huawei. These wins include a combination of both scale-out and scale-up storage designs.
QLogic Gen 5 FC solutions are designed to tackle high-bandwidth, I/O-intensive applications where reliability is critical. They reduce throughput bottlenecks from host to storage, giving users unprecedented application performance and optimal I/O. All QLogic Gen 5 FC solutions are backward-compatible with 8 Gb and 4 Gb FC networks, providing investment protection for existing FC SAN infrastructures. In fact, even if the storage system is the first Gen 5 component implemented, the user should see improved performance, since the storage HBA itself is designed to process more data more quickly, even on an 8 Gb network.
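Because backward compatibility means each link negotiates its own speed, it is worth confirming what rate a Linux host’s HBA ports have actually settled on. The kernel’s fc_host transport class exposes this in sysfs; the path is standard, but whether any ports are present depends on the machine, so this sketch handles the empty case:

```shell
# Report the negotiated link speed of each FC HBA port using the standard
# Linux fc_host sysfs interface (/sys/class/fc_host/hostN/speed).
found=0
for host in /sys/class/fc_host/host*; do
    if [ -e "$host" ]; then
        found=1
        printf '%s: %s\n' "$(basename "$host")" "$(cat "$host/speed")"
    fi
done
# On a machine with no FC HBAs, say so instead of printing nothing.
[ "$found" -eq 1 ] || echo "no fc_host entries found"
```

A port reporting “8 Gbit” behind a Gen 5 HBA is working as designed, but it also flags exactly the kind of speed bump this article describes.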
An end-to-end Gen 5 architecture gives FC an advantage over IP in data centers looking for optimal bandwidth. A quick survey of storage manufacturers showed that none were prepared to move past multiple 10 GbE ports for their iSCSI-attached storage systems, while, as shown above, many are moving to Gen 5 FC. Raw bandwidth should not be the only measure of performance; capabilities like efficiency of link usage and quality-of-service controls should also be considered. A 10 GbE IP SAN is suitable for many data centers, but for those looking for maximum performance with minimal latency, as well as network efficiency and quality-of-service controls, an end-to-end Gen 5 network infrastructure deserves strong consideration.