The Criticality of Cabling Infrastructure in High Performance Storage Networking

Pushed by initiatives like high-density virtualization, online database applications and low-latency flash storage, Fibre Channel (FC) storage networking is entering a new era sooner than expected. These initiatives are forcing the move to 16Gb FC networking at a much faster pace than prior generations of the technology. The challenge is that there is already a silent performance killer inside the data center: the cabling infrastructure. While lower link data rates such as 1, 2, or 4Gbps might be unaffected by the cable plant, higher rates of 8 and especially 16Gbps are highly sensitive to it. As a result, the impact of a poorly architected connectivity solution will be more devastating.

In addition, storage networking infrastructure and practices grew largely out of a campus LAN theory of design rather than a data center / mainframe theory of design. The problem is that storage traffic has very little in common with a LAN designed for messaging transfers. Storage environments are also much more dynamic, with ongoing moves, adds, and changes, versus the fairly static nature of the campus LAN. What makes the cabling infrastructure such an insidious problem is that a cable plant that lacks attention to industry specifications for minimizing connector reflection and maintaining link loss budgets doesn't always stop all storage I/O; instead, it slows it down, often one segment at a time.

The result of these factors is that when a single application exhibits unpredictable performance, the situation is very hard to troubleshoot. It is also difficult to catch the problem proactively through monitoring, since the cabling infrastructure can appear fine until stressed with traffic.

The best practice is to implement the cabling infrastructure correctly the first time, based on standards for data center environments, instead of having to identify the problem as an infrastructure more appropriate for a campus LAN intermittently degrades performance or, worse, halts it completely.

Storage Managers Pay Attention

Another performance challenge caused by an inadequately designed cabling infrastructure is that those most impacted by the problem, the storage team, are the furthest removed from it. While they may have the tools to monitor application, network and storage performance, they seldom have the means to monitor cable connection quality. Implementation and monitoring of the cable infrastructure is often the responsibility of the network team or the data center logistics team, and those teams often look at cabling as an 'on or off' problem: a cable is either connected or it is not, with seldom any consideration of an intermittent problem.

Connectivity Must Become Top of Mind

16Gbps FC networks like those from Brocade can provide the bandwidth to drive flash-based or flash-assisted storage arrays to their maximum potential. This means a tremendous return on investment (ROI) opportunity for the data center. New levels of ROI can be derived from increasing the number of virtual machines per physical host and from databases that can support 1,000 or more simultaneous users. But to achieve this ROI, the cabling infrastructure can't be treated as a secondary issue. If problems with that infrastructure cause the 16Gbps investment to unpredictably perform at 4Gbps or, worse, not at all, then that investment is wasted.

The Source of the Problem?

Because of the performance requirements of next generation storage architectures like 16Gbps FC, most data in these environments is transmitted over an optical connection. This means that maintaining light quality is critical to making sure that performance lives up to expectations. Data center cable infrastructures designed from a legacy campus-LAN mentality, instead of an enterprise data center mentality, often lead to the use of a cable architecture that counts on MTP cassettes instead of direct LC-to-LC connections. MTP and LC-to-LC connections are competing infrastructure designs that are defined and compared throughout the rest of this paper.

The more capable the network, the more important minimizing light loss becomes. As mentioned above, an environment that supports 4Gbps or even 8Gbps today could be experiencing no performance problems at all, even though the cabling may be sub-par. But when that data center upgrades to 16Gbps FC, it may not see any more performance than was available under the old infrastructure. This is because the faster network is more sensitive to light loss.

For a detailed understanding of the potential long-term challenges with MTP cassettes and why LC-based connections have an advantage as infrastructures move to their next generation, see the complete white paper.

MTP Cassette Considerations

What Are MTP Cassettes?

MTP stands for "Multi-fiber Termination Push-on," a connector designed by US Conec and built around the MT ferrule. Each MTP contains 12 fibers, or 6 duplex channels, in a single connector. MTPs were designed as a high-performance version of the Multifiber Push-On ("MPO") connector and will interconnect with MPO connectors.

Why Are MTP Cassettes Used?

For a period of time, many infrastructure vendors presented a modified version of the Fibre Channel roadmap suggesting that single lane serial connections were going to give way to parallel optics. What these vendors have been unable to reconcile is that the FCIA roadmap shows a continued course of single lane optics through at least the next generation (32Gbps) of Fibre Channel. In fact, conventional wisdom, with which Storage Switzerland agrees, is that Fibre Channel will remain single lane and serial for at least another three generations (128Gbps).

Why MTP Cassettes Present Long-Term Challenges

MTP cassettes have inherently higher dB loss than LC-based connections. This presents a significant challenge when trying to adhere to industry specifications for allowable light loss at distance in most large, single-purpose data centers. It is also a problem when attempting to implement a true structured connectivity solution designed to lower the operational expenses associated with all MAC (move, add, change) activity and to optimize active infrastructure performance while easing the labor-intensive tasks of documenting and managing all port connections.

Another challenge associated with the use of MTP cassettes is that customers are essentially "wasting" one third of their fiber investment: only the outer 8 fibers are used, leaving the middle 4 dark. In an attempt to utilize the dark, unused fibers, some practitioners choose to implement a conversion assembly. While this creates three 8-fiber links, it also adds another high insertion loss point and more expense that will ultimately be wasted as link loss budgets continue to contract while bandwidth requirements grow.
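To make the arithmetic concrete, here is a minimal sketch of the utilization and connector loss tradeoff. The 0.5 dB values assumed for an MTP cassette and for a conversion assembly are representative assumptions for illustration only; the 0.15 dB LC value is the factory-terminated specification cited later in this paper.

```python
# Illustrative arithmetic only. The MTP cassette and conversion assembly
# loss values are assumptions for the sake of example; the LC figure is
# the factory-terminated specification cited later in this paper.

MTP_FIBERS_PER_CONNECTOR = 12   # per this paper: 12 fibers / 6 duplex channels
FIBERS_ACTUALLY_LIT = 8         # only the outer 8 fibers carry traffic

dark_fraction = (MTP_FIBERS_PER_CONNECTOR - FIBERS_ACTUALLY_LIT) / MTP_FIBERS_PER_CONNECTOR
print(f"Fiber investment left dark per MTP trunk: {dark_fraction:.0%}")  # 33%

# Recovering the dark fibers with a conversion assembly adds another
# high-loss mated pair to every channel that passes through it.
LOSS_MTP_CASSETTE_DB = 0.50     # assumed, representative
LOSS_CONVERSION_DB = 0.50       # assumed, representative
LOSS_LC_PAIR_DB = 0.15          # DCS factory-terminated LC figure

print(f"MTP cassette + conversion assembly: {LOSS_MTP_CASSETTE_DB + LOSS_CONVERSION_DB:.2f} dB")
print(f"Single LC mated pair:               {LOSS_LC_PAIR_DB:.2f} dB")
```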

Can Bi-directional QSFP Help?

Quad Small Form-Factor Pluggable (QSFP) optics may be a logical step for other switch manufacturers to pursue. While complete product detail is beyond the scope of this paper, the technology essentially disputes the long-communicated claim that MTP connections are essential for adopting next generation technologies, which were expected to be parallel optic rather than serial in nature. With the Fibre Channel Industry Association (FCIA) roadmap clearly communicating that 16GFC and 32GFC will remain single lane serial connections, the logic, expense, and high light loss of MTPs are highly questionable when it comes to protecting an organization's investment in data center infrastructure.

In regard to 40Gbps Ethernet, the bi-directional QSFP uses multi-mode fiber with LC connectors and transmits and receives signal at different wavelengths over distances of up to 100 meters. In this scenario, the maximum allowable connector loss is just 1.0 dB, making a well-thought-out connectivity solution essential.
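To put that 1.0 dB allowance in perspective, the minimal sketch below compares the headroom left by two patch points built with MTP cassettes versus two built with LC mated pairs. The 0.5 dB per cassette figure is an assumed, representative value; the 0.15 dB LC figure is the factory-terminated specification cited later in this paper.

```python
# A minimal loss-budget check, not a design tool. The 1.0 dB connector-loss
# allowance comes from the 40GbE BiDi example above; the MTP per-cassette
# loss is an assumed, representative value, and the LC per-pair loss is the
# factory figure cited later in this paper.

CONNECTOR_LOSS_BUDGET_DB = 1.0

def headroom(mated_pairs: int, loss_per_pair_db: float) -> float:
    """Budget left after the channel's connector loss is subtracted."""
    return CONNECTOR_LOSS_BUDGET_DB - mated_pairs * loss_per_pair_db

# Assume a channel that crosses two patch locations between switch and device.
print(f"Two MTP cassettes (0.50 dB each):  {headroom(2, 0.50):+.2f} dB headroom")
print(f"Two LC mated pairs (0.15 dB each): {headroom(2, 0.15):+.2f} dB headroom")
```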

MTP Expense

In terms of expense, MTP cassettes are traditionally higher in cost than LC connectors. And, once again, over the long term they will run up against performance limitations at distance that will likely render them obsolete.

Troubleshooting

In the event there is a problem with one of the six channels within an MTP cassette, addressing it requires taking all six channels down, whereas LC connectors allow the problem to be isolated to the affected channel. This presents issues for both planned and unplanned downtime.

Is the Solution LC?

What Are LC Connectors?

LC connectors were developed by Lucent; the name is an acronym for Lucent Connector. They are small form-factor, high-performance connectors especially designed for single-mode applications. That said, LC connectors certainly have application in multi-mode environments as well. They also have a lower light loss per mated pair than MTP connectors (DCS manufactures to a factory-terminated specification of just 0.15 dB of loss per mated pair using multi-mode fiber; TIA specifications allow 0.75 dB per mated pair). LC connectors were developed to meet the growing demand for small, high-density fiber optic connectivity on equipment bays, in distribution panels and on wall plates.

The LC Connector Advantage

As stated above, LC connectors have a lower insertion loss and a higher return loss (less reflection at the interface) and therefore conduct light more effectively than MTP/MPO connectors do. With either technology (LC or MTP), when mating a pair of connectors the glass actually protrudes from the end face of the connector, physically making contact with the end face of the other connector. In reality, the connection calls for perfectly aligning that transfer of light between strands of material as thin as a human hair. With MTP connectors, there are 12 points of connection which need to be aligned, with those fiber ends normally forming a radius (the middle being higher than the outside edges). It is difficult to consistently polish the end face of each strand, and when making the connection, the higher fiber ends of the MTP give under pressure and are often damaged, reducing connection quality.

Again, LC connectors have a lower light loss per mated pair. A topology that incorporates LC connectors offers lower end-to-end light loss with the flexibility to migrate to next generation technologies while protecting the infrastructure investment. This topology also creates a lower latency environment for application delivery, which becomes increasingly important as organizations introduce more bandwidth intensive applications that also tend to be mission critical in nature. Finally, 100% of the fiber networking that an organization invests in is utilized; no strands are left dark. This means that no money is wasted on connectors that become obsolete as the next generation of low link-loss technology is introduced.

Benefits of a Structured, End-to-End LC Based Solution

The benefits of an end-to-end, structured connectivity solution are numerous. The concept was conceived in 1990, shortly after IBM introduced the first fibre-attached mainframe. Coined a "Fiber Transport System" (FTS), the IBM-driven concept was adopted into the TIA-942 Data Center Standard under what is now section 7.5.1, which states that every port on every active device be represented by a port on the front side of a panel at a Central Patching Location (CPL).

Data Center Systems (DCS) provides a structured connectivity solution that differs from the typical under-floor, direct connect methodology. These structured systems introduce a Central Patching Location (CPL) with DCS "Mimic Panels" offering a mirror image of switch ports. This system minimizes the number of fiber-optic cables under a data center's raised floor while providing scale at the CPL. That scale comes from running short jumper cables to connect additional peripherals in a given zone and then mirroring those changes at the CPL patch panels. The use of LC connectors in the topology design ensures low-loss, end-to-end connectivity that provides scale, low latency and low risk. Additional trunks may be run at initial installation to support growth in peripheral devices, with simple jumper patching at the zone and corresponding jumper patching at the CPL.
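As a rough illustration of why an all-LC structured channel stays comfortably inside a connector-loss allowance, the sketch below counts the mated pairs in a hypothetical device-to-switch path through a CPL and sums their loss. The channel layout is an assumption for illustration, not a DCS reference design; the per-pair and allowance figures are the ones cited earlier in this paper.

```python
# Hypothetical end-to-end structured channel through a Central Patching
# Location (CPL). The segment list is an illustrative assumption, not a
# DCS reference design. Loss figures: 0.15 dB per LC mated pair (DCS
# factory-terminated spec cited above), checked against the 1.0 dB
# connector-loss allowance used in the 40GbE BiDi example.

LOSS_PER_LC_MATED_PAIR_DB = 0.15
CONNECTOR_LOSS_ALLOWANCE_DB = 1.0

# Each entry is one mated pair the light crosses between device and switch.
mated_pairs = [
    "device jumper -> zone panel",
    "zone trunk -> CPL mimic panel",
    "CPL jumper -> switch port",
]

total_loss = len(mated_pairs) * LOSS_PER_LC_MATED_PAIR_DB
print(f"Mated pairs in channel : {len(mated_pairs)}")
print(f"Total connector loss   : {total_loss:.2f} dB")
print(f"Headroom vs. allowance : {CONNECTOR_LOSS_ALLOWANCE_DB - total_loss:.2f} dB")
```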

Conclusion

The storage network infrastructure needs to respond to ever increasing I/O demand. This demand is created by a server infrastructure that can finally generate more I/O than the underlying storage architecture has traditionally been able to respond with. At the same time, the storage media, traditionally hard drives, is rapidly being upgraded to flash-based storage. For the first time in the data center, the top tier (compute) can generate massive amounts of I/O and the bottom tier (storage) has the ability to respond, thanks to memory-based storage. Stuck in between is the network infrastructure, and upgrading the switches and HBA cards is not enough. It is critical that IT planners address the underlying cable infrastructure in parallel with upgrading switches and server adapters.

Sponsored by Data Center Systems


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
