NVMe is a storage protocol designed specifically for flash-based storage. It is PCIe-based and provides far more IO queues, and more commands per queue, than legacy SCSI-based protocols. The NVMe over Fabrics (NVMe-F) extension gives IP and Fibre Channel (FC) networks the ability to take advantage of NVMe’s higher command count and queue depth to fully exploit memory-based storage. As discussed in our blog “What is Scale-Out NVMe?”, NVMe-F will first be used to create a more scalable storage architecture, but eventually it will work its way into servers and switches, building an end-to-end NVMe architecture.
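To make the queue-depth difference concrete, here is a minimal sketch comparing the command-level parallelism a host can sustain under legacy AHCI/SATA versus NVMe. The figures come from the published specifications (AHCI exposes one queue of 32 commands; NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep); the function name is ours, for illustration only.

```python
# Illustrative comparison of outstanding-command capacity: AHCI/SATA
# (1 queue x 32 commands) versus NVMe (up to 65,535 I/O queues, each
# up to 65,536 commands deep, per the NVMe specification).

def outstanding_commands(queues: int, depth: int) -> int:
    """Maximum commands a host can keep in flight at once."""
    return queues * depth

ahci = outstanding_commands(queues=1, depth=32)
nvme = outstanding_commands(queues=65_535, depth=65_536)

print(f"AHCI/SATA: {ahci:,} outstanding commands")
print(f"NVMe:      {nvme:,} outstanding commands")
```

The raw queue arithmetic overstates what any single device delivers in practice, but it shows why the protocol, rather than the media, had become the bottleneck.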
Why End-to-End NVMe?
For workloads that need extremely high performance and very low latency, one of the key design decisions is where to place storage physically. If the IT planner decides on a shared storage architecture, the environment gains all the benefits of shared storage, such as data protection, better availability, capacity efficiency, and scale. But shared storage does introduce latency, especially when compared to storage internal to the server on which the workload is running. If the IT planner chooses internal server storage instead, the latency concern goes away, especially with NVMe-based storage, but it becomes complex to provide the capabilities that shared storage systems have built in.
NVMe-F enables the storage network to deliver performance and latency very similar to internal storage. As a result, the IT planner can have the best of both worlds: very fast, low-latency storage with all the data protection, data efficiency, and high availability features for which shared storage is known.
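A back-of-the-envelope calculation shows why the gap narrows. The numbers below are illustrative assumptions, not measurements: roughly 100 µs for a local NVMe flash read, about 10 µs of added fabric overhead (the often-cited NVMe-F design target), and a couple hundred microseconds of added latency for a traditional SCSI-based SAN path.

```python
# Illustrative latency comparison: internal NVMe vs. shared storage
# reached over NVMe-F vs. a legacy SCSI SAN. All figures are assumed
# round numbers for the sake of the comparison.

LOCAL_NVME_US = 100          # assumed local NVMe flash read latency
NVMEF_OVERHEAD_US = 10       # assumed added fabric latency with NVMe-F
SCSI_SAN_OVERHEAD_US = 200   # assumed added latency of a SCSI SAN path

nvmef_total = LOCAL_NVME_US + NVMEF_OVERHEAD_US
scsi_total = LOCAL_NVME_US + SCSI_SAN_OVERHEAD_US

print(f"Internal NVMe:       {LOCAL_NVME_US} us")
print(f"Shared via NVMe-F:   {nvmef_total} us (+{NVMEF_OVERHEAD_US / LOCAL_NVME_US:.0%})")
print(f"Shared via SCSI SAN: {scsi_total} us (+{SCSI_SAN_OVERHEAD_US / LOCAL_NVME_US:.0%})")
```

Under these assumptions the fabric adds about 10 percent to a local read, versus a 3x penalty on the legacy path, which is the "best of both worlds" argument in numeric form.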
End-To-End NVMe Requirements
End-to-end NVMe requires several components. First, it needs a storage system that has not only internal NVMe connections, which is becoming more common, but also external NVMe connections, which most storage systems do not yet have.
Second, network switches, be they Fibre Channel or IP-based, will also require NVMe support, and the two major FC storage infrastructure providers offer that support now. On the IP side, any Ethernet switch that can carry RDMA traffic losslessly (via Data Center Bridging, for example) can support NVMe. It is important from an IT perspective to make sure that the switch infrastructure will support both NVMe and legacy SCSI (or iSCSI) protocols, since most environments will not switch to NVMe-F all at once.
Third, the servers that are going to connect to the storage system via NVMe will need an NVMe-capable network card. Again, in the IP case, most Converged Network Adapters (CNAs) have this capability today. NVMe-ready FC adapters may require a firmware update, but that firmware is also now available.
Finally, once all these requirements have been fulfilled, the IT planner will want to create an NVMe-F-only path from the NVMe server, through the switch, to the storage. Mixing SCSI and NVMe-F on the same logical network path may force the network to treat all traffic at the lowest common denominator (SCSI), which will hinder performance. In fact, networks may require that NVMe traffic run on its own logical path.
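On Linux, the host side of such a path can be sketched with the standard nvme-cli tool. This is a hypothetical configuration fragment, not a procedure from the article: the target address, service ID, subsystem NQN, and transport are placeholders, and a real deployment would point them at a dedicated, NVMe-F-only interface as described above.

```shell
# Hypothetical NVMe-F host setup sketch using nvme-cli over RDMA
# (use the nvme-fc transport module instead for Fibre Channel).
# All addresses and NQNs below are placeholders.

# Load the RDMA fabric transport module.
modprobe nvme-rdma

# Discover subsystems exported by the target (placeholder address,
# 4420 is the standard NVMe-F service port).
nvme discover -t rdma -a 192.168.50.10 -s 4420

# Connect to one discovered subsystem by its NQN (placeholder NQN).
nvme connect -t rdma -n nqn.2017-01.com.example:subsys1 \
    -a 192.168.50.10 -s 4420

# Verify that a new /dev/nvmeXnY namespace device appeared.
nvme list
```

Because the discover and connect steps require a live NVMe-F target, this fragment is meant as a shape of the workflow rather than something to paste verbatim.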
The reality is that most data centers won’t need end-to-end NVMe for several years. For one thing, workloads and design practices need to catch up to the capabilities of the architecture, instead of the other way around. An end-to-end NVMe architecture will enable data centers to rethink how far they push their virtualized and containerized environments, or how scalable their transaction-oriented databases can become.
CPU processing power has always been far ahead of the network’s and storage system’s ability to feed it data. NVMe allows both to catch up, and IT planners can expect to make their CPUs work harder than ever. NVMe, both now with storage-system-based NVMe and in the future with end-to-end NVMe, will drive down the cost of IT because the organization will finally be able to maximize its CPU investment.
Sponsored by Tegile