Often overlooked, the cabling infrastructure is the circulatory system of the data center. An organization can invest in the most powerful servers, the fastest storage and the most advanced switches, but if data cannot flow smoothly between them, these investments go to waste. Data Center Systems’ Structured Connectivity solutions reduce the time it takes to bring new systems online, ensure that storage investments perform at their full potential, and enable moves, adds and changes without having to access active equipment.
Implementing the right connectivity architecture is also critical as the data center works to become more agile in responding to the needs of the business. Most organizations have invested time and resources in rapid application provisioning, but in many cases this agility stumbles when a change to the cabling infrastructure is necessary. A single move, add or change can take days or even weeks.
Flash storage, whether all-flash arrays or flash-heavy hybrid arrays, is key to the move toward an agile data center. But those high-performance systems quickly drive an upgrade of the surrounding infrastructure. Many data centers are moving to 16Gb/s host bus adapters (HBAs) and 16Gb/s switches. These faster architectures impose stricter cabling requirements, and “link loss” becomes a key concern. The problem is that a poorly architected connectivity design won’t always stop the movement of data; it degrades it, which makes troubleshooting very difficult.
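To make the “link loss” concern concrete, here is a minimal sketch of an optical loss-budget check. The per-kilometer, per-connector and budget figures below are illustrative assumptions only, not values from this article or any vendor datasheet; actual budgets depend on the fiber grade, speed and standard in use.

```python
# Hypothetical loss-budget check for an optical link, e.g. a 16Gb/s
# Fibre Channel run. All dB figures here are assumed for illustration.

def link_loss_db(length_km, connectors, splices,
                 fiber_db_per_km=3.5, connector_db=0.75, splice_db=0.3):
    """Estimate total insertion loss: fiber attenuation plus every
    mated connector pair and splice in the path."""
    return (length_km * fiber_db_per_km
            + connectors * connector_db
            + splices * splice_db)

def within_budget(loss_db, budget_db):
    """A link that exceeds its budget may still pass traffic, but with
    degraded signal margin -- the hard-to-troubleshoot failure mode."""
    return loss_db <= budget_db

# Example: 100 m of fiber with 4 mated connector pairs, checked
# against an assumed 1.9 dB budget.
loss = link_loss_db(0.1, connectors=4, splices=0)
print(round(loss, 2), within_budget(loss, 1.9))
```

Note how the connector pairs, not the fiber itself, dominate the loss on a short run: every extra patch point eats into the budget, which is why cassette-heavy designs can push a link over the edge without ever taking it down.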
How’d We Get Here?
Most storage infrastructures start out as a simple point-to-point design that uses dedicated cables to connect servers and storage directly to a switch. This method performs well but does not scale well. It also requires direct interaction with active components such as switches, servers and storage.
As a SAN begins to scale and port counts reach into the hundreds, if not thousands, storage professionals need to look for an alternative. The network team seems the obvious source of expertise, so many storage networks follow the same design as the campus LAN. But the storage network is not a local area network. It needs a much more deterministic infrastructure, one where packet loss cannot be tolerated. Storage also typically requires the highest bandwidth available, which means fiber optics and a concern over light loss.
Essentially, SAN administrators went to the wrong place for help; instead of the network team, they should have worked with the mainframe team. In the early 1990s, IBM introduced the concept of a fiber optic structured cabling system to support its ESCON mainframes and other devices that utilized optical connections. The IBM Fiber Transport System (FTS) utilized a central patching location. FTS was later adopted into the Telecommunications Industry Association (TIA) standard for data center connectivity.
Data Center Systems Structured Connectivity
Data Center Systems leverages a unique blend of data center best practices and purpose-built products to deliver a structured cabling design that allows systems to perform at their full potential while also allowing IT to respond rapidly to the needs of the business. With this approach, IT personnel never have to manipulate active equipment, like director class switches, unless a hardware change is necessary. As one example, Data Center Systems designs unique patch panels to match port numbers on the director switches, enabling easier documentation of all port connections.
The Topology of the Infrastructure
The Data Center Systems design is a topology that connects an organization’s director class switches to a central patching location (CPL). The director switches are connected to a patch panel at the CPL that mimics the actual ports on the switch. For example, Brocade directors start their port numbering at 0, while Cisco directors start at 1; Data Center Systems has panels to mimic either configuration.
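The 0-based versus 1-based numbering the panels must mimic amounts to a simple offset. The sketch below is an illustration only; the function name and the 1-based panel labeling are assumptions, not part of any Data Center Systems product.

```python
# Hypothetical mapping from a patch-panel position to a director
# switch port. Brocade directors number ports from 0, Cisco from 1,
# so the panel must apply the right starting offset.

def panel_to_switch_port(panel_position, first_port=0):
    """Translate a 1-based panel position into a switch port number,
    where first_port is 0 (Brocade-style) or 1 (Cisco-style)."""
    return panel_position - 1 + first_port

print(panel_to_switch_port(1, first_port=0))  # Brocade-style panel
print(panel_to_switch_port(1, first_port=1))  # Cisco-style panel
```

Matching the panel labels to the director's own numbering is what makes port documentation trivial: the label on the panel is the port on the switch, with no mental translation.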
Zones, customer-defined areas of endpoint devices (storage and servers), are then trunked to the CPL. As a result, all MACs (moves, adds and changes) can be done at the CPL, minimizing interaction with active equipment and mitigating the risk of unintended downtime.
The result is that every switch, server and storage device throughout the data center is represented by an individual port on the front of the patch panels in the CPL. Connecting two devices is accomplished with a simple jumper cable on the front side of the patch panels at the CPL, allowing for instant device-to-device connectivity.
This design is often done without the use of MTP cassettes. In our article “The Criticality of Cabling Infrastructure in High Performance Storage Networking”, we explain why these devices often lead to link loss, which can impact the performance of high-speed devices like all-flash and hybrid arrays. The structured design provides industry-leading dB retention (rather than loss), ensuring high availability and low latency.
The structured approach to connectivity allows for rapid connection of switches to servers and storage, so the data center does not trip over its cable infrastructure on the way to agility. In this design, form meets function, delivering a high level of density without sacrificing manageability. Data Center Systems designs, builds, installs and manages these infrastructures to meet the demands of the data center today while preparing its customers for a smooth transition to next-generation technologies.
Sponsored by Data Center Systems