The NVMe protocol allows servers and storage systems to communicate with flash storage more efficiently. This PCIe-based architecture enables a streamlined command set and far deeper IO queues, resulting in lower latency and greater performance. It is a wholesale replacement for SCSI. But SCSI still lurks elsewhere in the data center: storage networks, both Fibre Channel (FC) and iSCSI, encapsulate SCSI commands to share storage across a network. NVMe Over Fabrics is the next step in shared storage, and it promises near in-server performance for IO operations.
In this StorageShort we discuss what exactly NVMe Over Fabrics is.
Networks, just like storage, have continued to advance, providing better communications, broader bandwidth and enhanced IO control. The problem is, like flash storage, advanced networking is encumbered by SCSI. NVMe Over Fabrics allows the network to have the same command set and queue depth that in-server NVMe has. It should allow data centers to tap into the full potential of their networks just like NVMe allows them to tap into the full potential of their flash storage.
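The queueing difference is easy to quantify. As a back-of-the-envelope sketch, using the commonly cited limits (a single 32-command queue for legacy AHCI/SATA, versus up to 65,535 IO queues of up to 65,536 commands each in the NVMe specification):

```python
# Rough comparison of per-device command queueing capacity.
# Figures: AHCI/SATA allows 1 queue of 32 commands; the NVMe spec
# allows up to 65,535 I/O queues of up to 65,536 commands each.

AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH

print(f"AHCI/SATA outstanding commands: {ahci_outstanding:,}")
print(f"NVMe outstanding commands:      {nvme_outstanding:,}")
print(f"Increase: ~{nvme_outstanding // ahci_outstanding:,}x")
```

These are specification maximums, not what any one drive or fabric implements, but they illustrate why extending the NVMe queueing model across the network is attractive.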
Why Does The Data Center Need NVMe Over Fabrics?
For some data centers, today’s SCSI-based SAS flash drives and high-speed networks provide more than enough performance. But for an increasing number of data centers they do not. It is reasonable to expect that as applications catch up with the performance characteristics of flash, the number of data centers needing more performance will grow. Eventually most data centers will hit a performance wall, and since they will already be all-flash, or at least mostly flash, adding more flash won’t be the answer. The next bottleneck is the network.
Before NVMe Over Fabrics, the network, just like the internal server connection to the flash drives, primarily carried SCSI commands or a file protocol like NFS or SMB. NVMe Over Fabrics (NVMe-oF) allows those networks to communicate via the NVMe protocol, which means they too can take advantage of the NVMe command set and its advanced queuing.
Before we see NVMe Over Fabrics attached servers, the first NVMe Over Fabrics implementations will likely be private to a storage cluster. An NVMe fabric should deliver near in-server storage IO performance and latency. That near in-server performance should allow for the creation of very fast, low-latency scale-out storage infrastructures. Imagine a scale-out architecture with scale-up latencies.
NVMe is a protocol that is already seeing rapid adoption, and that adoption will only accelerate. The first iterations will be inside storage systems to help with internal IO. Then, thanks to NVMe Over Fabrics, the use case will expand to very scalable, low-latency scale-out storage architectures, and finally to complete end-to-end NVMe connectivity.