Many data centers are enjoying a performance bubble: they have more performance than they currently need. The move from high-latency hard disk systems to near-zero-latency flash arrays delivers a dramatic performance gain in virtually every case. But this bubble will burst, and when it does, application owners and users will come knocking on IT’s door demanding more performance. The time to plan for that day is now.
What Could Go Wrong With The All-Flash Data Center?
The move to the all-flash data center continues. But the number of workloads, and the number of users accessing those workloads, is also increasing non-stop. Thus far, virtualization has been the catalyst for the workload increase, but as containerized environments like Docker become more commonplace, the typical organization may need to deal with thousands, or even hundreds of thousands, of application instances being spun up within seconds.
The good news is that flash media can keep pace with these increases. Intel also seems able to keep producing more powerful processors, so compute will keep pace as well. The problem is everything in between compute and flash media.
All-flash array vendors are doing their part. Most have initiatives under way, or nearly complete, to deliver the internal networking needed to maintain performance under these new workload requirements. Most of these initiatives center on NVMe, a flash-optimized storage IO protocol that dramatically increases queue depth and command count.
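The scale of that queue-depth increase is easy to underestimate. As a rough illustration, the figures below come from the AHCI and NVMe specification maximums (real deployments configure far fewer queues, so treat this as an upper-bound sketch, not typical behavior):

```python
# Illustrative comparison of command parallelism: legacy AHCI/SATA vs NVMe.
# These are spec-level maximums, not typical configurations.

ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands deep
nvme_queues, nvme_depth = 65_535, 65_535   # NVMe: up to ~64K queues, ~64K deep

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

The point of the arithmetic is simply that NVMe raises the ceiling on in-flight commands by many orders of magnitude, which is why the surrounding network, not the media, becomes the next constraint.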
The missing link is the network. While today’s 10GbE and 16-Gbps Fibre Channel (FC) may seem adequate, network IO will begin to bottleneck as the number of workloads continues to increase. Most problems will appear first on inter-switch communications, where aggregated traffic will simply overwhelm current bandwidth capacity. Even organizations that don’t need them yet will see real demand for 32-Gbps inter-switch links very soon.
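A quick sizing sketch shows why inter-switch links (ISLs) feel the pressure first. All port counts and speeds below are hypothetical, chosen only to illustrate the oversubscription arithmetic:

```python
# Hypothetical fabric sizing sketch: edge-port bandwidth vs ISL bandwidth.
# Port counts and speeds are illustrative, not from any specific product.

edge_ports = 48          # host-facing ports on an edge switch
edge_speed_gbps = 16     # 16-Gbps FC host attachments
isl_count = 4            # ISLs uplinking to the core
isl_speed_gbps = 16      # 16-Gbps ISLs today

edge_bw = edge_ports * edge_speed_gbps           # 768 Gbps of edge capacity
isl_bw = isl_count * isl_speed_gbps              # 64 Gbps of uplink capacity
print(f"Oversubscription: {edge_bw / isl_bw:.0f}:1")

# Doubling the ISL speed to 32 Gbps halves the ratio without adding links.
isl_bw_32 = isl_count * 32
print(f"With 32-Gbps ISLs: {edge_bw / isl_bw_32:.0f}:1")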
As application instances and the number of users accessing those applications continue to increase, the bandwidth of the attached physical servers, and the protocol they use to communicate with flash storage, will become an issue. The good news is that port speeds are increasing across both IP and FC. More importantly, initiatives to put NVMe on the network (NVMe over Fabrics) are also well under way.
Cisco Storage – Preparing the Data Center for The Future Now
The Cisco MDS 9700 Series Multilayer Directors have been at the heart of Cisco’s storage networking strategy for years. They provide enterprise-class availability, scalability and flexibility. The MDS family comes in three configurations: the MDS 9718, an 18-slot chassis with 16 line card slots and up to 16 power supplies; the MDS 9710, a 10-slot chassis with eight line card slots and up to eight power supplies; and the MDS 9706, a 6-slot chassis with four line card slots and up to four power supplies.
Recently Cisco added a 48-port 32-Gbps FC switching module to its offering. It works in any of the 9700 family of directors and is field-upgradable. 32-Gbps FC is double the speed of current 16-Gbps FC and four times that of the more common 8-Gbps FC. A 9718 fully loaded with these cards will deliver 768 ports of line-rate 32-Gbps performance.
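The 768-port figure falls straight out of the chassis numbers quoted above, and it implies a striking amount of aggregate bandwidth per direction:

```python
# Arithmetic behind the fully loaded MDS 9718 figure in the text.
line_card_slots = 16   # 9718 chassis, per the article
ports_per_card = 48    # the new 32-Gbps FC module
port_speed_gbps = 32

total_ports = line_card_slots * ports_per_card
aggregate_gbps = total_ports * port_speed_gbps

print(f"Total ports: {total_ports}")                   # 768
print(f"Aggregate bandwidth: {aggregate_gbps} Gbps")   # 24576
```

That is roughly 24.5 Tbps of line-rate FC capacity in a single director, using only the slot and port counts the article itself states.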
On the host side, Cisco collaborated with Broadcom/Emulex and Cavium/QLogic to deliver 32-Gbps FC host bus adapters designed specifically for Cisco’s incredibly popular UCS C-Series servers. The result is an end-to-end 32-Gbps communication path that should be able to handle the next wave of performance demands.
Cisco also recently announced that the MDS series of switches will support NVMe over Fibre Channel, which improves protocol efficiency over the legacy SCSI transport. Cisco MDS customers can non-disruptively upgrade their existing MDS operating systems to support both SCSI and NVMe simultaneously. While most data centers won’t need NVMe over Fabrics’ efficiency today, building the protocol into the network now is the ideal way to future-proof it.
With all this IO moving at higher speeds, proactively managing the network becomes more critical than ever. In the modern data center, where thousands of users may suffer from a single network outage, reactive problem resolution is no longer acceptable. IT needs to make operational decisions proactively, based on the data at hand. Most storage networks are a gold mine of information; the challenge is accessing that data and sifting through it quickly enough to be alerted to impending problems.
Each of Cisco’s new 32-Gbps modules provides a built-in analytics engine that monitors all flows on all ports at line rate. The engine computes IO-level metrics in every switch, so MDS customers can now analyze FC traffic exchanges in real time. The result is comprehensive and timely alerting on any potential performance issue.
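The article doesn’t describe Cisco’s telemetry schema, but the general idea of switch-resident IO analytics can be sketched generically: group flow records by initiator/target pair, compute latency metrics, and flag outliers. The record format, field names, and alert threshold below are all hypothetical:

```python
# Generic sketch of per-flow IO metrics an in-switch analytics engine might
# compute. The flow-record format and threshold are hypothetical, not
# Cisco's actual telemetry schema.
from statistics import mean

# (initiator, target, IO completion latency in microseconds)
flows = [
    ("host-a", "array-1", 210),
    ("host-a", "array-1", 195),
    ("host-b", "array-1", 2300),   # outlier worth alerting on
    ("host-b", "array-2", 180),
]

LATENCY_ALERT_US = 1000  # hypothetical alert threshold

def summarize(flows):
    """Group latencies per initiator/target pair and flag slow flows."""
    by_pair = {}
    for init, tgt, lat in flows:
        by_pair.setdefault((init, tgt), []).append(lat)
    return {
        pair: {
            "avg_us": mean(lats),
            "max_us": max(lats),
            "alert": max(lats) > LATENCY_ALERT_US,
        }
        for pair, lats in by_pair.items()
    }

for pair, stats in summarize(flows).items():
    print(pair, stats)
```

The value of doing this in the switch, rather than polling hosts and arrays, is that every exchange crossing the fabric is visible in one place, at line rate.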
The storage network evolves at a slower rate than the rest of the data center; it has to. But that slow-evolving reality also means IT planners need to look far ahead to make sure the changes they make to the infrastructure now are compatible with technologies that are years away from implementation.
The MDS line has proven itself not only scalable and flexible but also ready for future demands. OS upgrades that add new protocol support, and high-performance cards with built-in analytics, are two excellent proof points of the platform’s adaptability.