One of the advantages of NVMe, the flash interface standard, is that it is networkable. NVMe over Fabrics (NVMf) brings latency comparable to direct-attached storage to shared storage systems. NVMf eliminates the need for modern applications like Hadoop to chase low latency by using direct-attached storage with replication; instead, it lets organizations realize the latency reduction of NVMe without losing the benefits and efficiencies of a shared storage system. The promise of NVMf is so great that some vendors suggest an end-to-end NVMe infrastructure is a requirement for moving to NVMe.
Requirements for NVMf
For an organization to create an end-to-end NVMe infrastructure, it needs an NVMe host bus adapter (HBA) in each server, an NVMe-ready network switch, and a storage system with not only NVMe flash drives but also NVMe target adapters. There are three problems with these requirements today. First, they are expensive; most NVMe-ready products carry a premium price. Second, while the switch vendors have done an excellent job of providing simultaneous access to legacy protocols and NVMf, it is a different environment with different adapters, so there is a new learning curve. Third, compatibility between vendors is limited. While working configurations are possible, the customer is generally very restricted as to which HBAs they can use with each operating system.
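To make the host side of that picture concrete, here is a minimal sketch, assuming a Linux server with an RDMA-capable adapter and the open-source nvme-cli utility installed, of how a host would discover and connect to an NVMf target. The target address, service port, and subsystem NQN are placeholder values, not references to any particular product.

```python
# Hypothetical sketch: attaching a Linux host to an NVMf target with
# the open-source nvme-cli utility. The address, port, and subsystem
# NQN below are placeholder values used purely for illustration.
import subprocess

TARGET_ADDR = "192.168.10.50"                   # assumed target portal IP
TARGET_PORT = "4420"                            # standard NVMf service port
SUBSYS_NQN = "nqn.2014-08.org.example:array1"   # placeholder subsystem NQN

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

# Ask the target which NVMe subsystems it exposes over the RDMA transport.
print(run(["nvme", "discover", "-t", "rdma",
           "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one subsystem; its namespaces then appear as local /dev/nvmeXnY devices.
run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Confirm the new namespaces are visible to the host.
print(run(["nvme", "list"]))
```

The commands themselves are simple; the expense and the learning curve come from the RDMA-capable adapters, switch configuration, and vendor support matrices that sit underneath them.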
Is NVMf Worth It Today?
NVMf is the protocol of choice for the future, but for today IT planners should proceed with caution, especially given the above concerns and the reality that most organizations use only a small portion of their current network’s potential. There is also room to grow the current network’s capabilities, since higher-bandwidth Ethernet and Fibre Channel switches and HBAs are now available. While more bandwidth doesn’t always reduce latency, it still improves performance, because more servers and virtual machines can send more I/O across the same network. The challenge with the ever-improving network and increasing server computing power is that I/O eventually bottlenecks at the storage system.
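To see why more bandwidth helps aggregate performance more than it helps any single I/O, consider the back-of-the-envelope calculation below. The link speeds, I/O size, and 100-microsecond flash read latency are assumed, illustrative numbers, not measurements.

```python
# Back-of-the-envelope sketch with assumed, illustrative numbers:
# doubling link speed roughly doubles how many I/Os the link can carry,
# but barely changes the latency of a single I/O, which is dominated
# by the flash media and the storage software stack.
IO_SIZE_BYTES = 8 * 1024        # assumed 8 KiB I/O
MEDIA_LATENCY_US = 100.0        # assumed flash read latency in microseconds

def usable_bytes_per_sec(link_gbits):
    """Rough usable bandwidth, ignoring protocol overhead."""
    return link_gbits * 1e9 / 8

for link_gbits in (16, 32):     # e.g. 16Gb vs. 32Gb Fibre Channel
    bw = usable_bytes_per_sec(link_gbits)
    wire_time_us = IO_SIZE_BYTES / bw * 1e6
    total_latency_us = MEDIA_LATENCY_US + wire_time_us
    link_iops = bw / IO_SIZE_BYTES
    print(f"{link_gbits}Gb link: ~{wire_time_us:.1f} µs on the wire per I/O, "
          f"~{total_latency_us:.1f} µs total latency, "
          f"~{link_iops/1000:.0f}K IOPS of raw link capacity")
```

Under these assumptions, moving from 16Gb to 32Gb shaves only a couple of microseconds off each I/O but doubles how much traffic the link can carry, which is exactly why the bottleneck migrates to the storage system.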
NVMe Flash Arrays
What’s needed first for most organizations is a high-performance all-flash array that is native NVMe throughout. This type of NVMe system solves the bottleneck of many servers and virtual machines all sending I/O to a single storage system. It also enables the organization to use its existing network infrastructure and networking protocols. Additionally, it enables a gradual transition to NVMe that allows applications and workloads to run without disruption.
Vendors providing these systems need to make sure their storage software takes full advantage of NVMe, which likely means tweaking that software to integrate more tightly with the protocol. They also need to provide more powerful processors in their storage systems and make sure their software can exploit those processors so that I/O moves quickly through the system. Finally, they should continue to provide the enterprise features users count on, such as SAN/NAS support, snapshots, replication, deduplication, and compression.
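One concrete example of software taking advantage of NVMe is spreading I/O across many parallel submission queues, typically one per CPU core, instead of funneling everything through the single queue older protocol stacks assumed. The sketch below is a simplified, hypothetical model of that idea; the queue counts, round-robin dispatch, and simulated requests are invented purely for illustration and do not represent any vendor’s implementation.

```python
# Simplified, hypothetical model of multi-queue I/O dispatch. NVMe permits
# many parallel submission queues, so storage software that services one
# queue per CPU core avoids the single-queue serialization of older stacks.
import os
import queue
import threading

NUM_QUEUES = os.cpu_count() or 4                 # assume one queue per core
submission_queues = [queue.Queue() for _ in range(NUM_QUEUES)]

def service_queue(q, core_id):
    """Worker loop: drain one submission queue independently of the others."""
    while True:
        io_request = q.get()
        if io_request is None:                   # shutdown sentinel
            break
        # A real array would issue the NVMe command here; we only tag it.
        _ = (core_id, io_request)

workers = [threading.Thread(target=service_queue, args=(q, i))
           for i, q in enumerate(submission_queues)]
for w in workers:
    w.start()

# Round-robin 10,000 simulated I/O requests across the queues so that no
# single queue (and no single core) becomes the serialization point.
for i in range(10_000):
    submission_queues[i % NUM_QUEUES].put(("read", i))

# Signal shutdown and wait for the workers to drain their queues.
for q in submission_queues:
    q.put(None)
for w in workers:
    w.join()
```

The point of the model is the shape of the data path, not the threading details: the more independent queues the storage software can keep busy, the more of the array’s processors and NVMe drives it can use at once.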
Conclusion
There is little doubt that over time most data centers will move to NVMf, but that transition needs to be gradual and can’t disrupt current applications and workloads. During the transition, IT still needs to address performance bottlenecks, and more than likely an NVMe flash array solves most of those issues.
In the 2nd paragraph, there are two mentions of “Host Bus Adapter”. I am fighting a battle to get folks to use, say, “Target Adapters” for storage systems and not “Host Bus Adapters”. One group says “Host Bus Adapter”; one says “Networking Head Adapter”; one says “the term Target Adapter does not exist in the industry” (that’s the web designer).
Hi Rich, fair comment and fair request. We will help you in your battle. I’ll make the changes later today. Thanks for reading. -George