StorageShort: Why Does The Data Center Need NVMe Over Fabrics?

The NVMe protocol allows servers and storage systems to communicate with flash storage more efficiently. This PCIe-based architecture enables a richer command set and far deeper IO queues, resulting in lower latency and greater performance. It is a wholesale replacement for SCSI. But there is another SCSI protocol lurking in the data center. Storage networks, both FC and iSCSI, encapsulate SCSI commands to share storage across a network. NVMe Over Fabrics is the next step in shared storage, and it promises to deliver near in-server performance for IO operations.
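The scale of the queuing difference is easy to see in the numbers the protocols define. The figures below are commonly cited protocol maximums, used here purely for illustration; real drives and HBAs expose far fewer queues than the specification allows:

```python
# Illustrative comparison of command-queuing headroom (protocol maximums,
# not what any particular device actually ships with).

# A SAS (SCSI) drive is typically limited to a single command queue
# with a depth of around 254 outstanding commands.
sas_queues = 1
sas_queue_depth = 254

# The NVMe specification allows up to 65,535 IO queues per controller,
# each up to 65,536 commands deep.
nvme_queues = 65_535
nvme_queue_depth = 65_536

sas_outstanding = sas_queues * sas_queue_depth
nvme_outstanding = nvme_queues * nvme_queue_depth

print(f"SAS outstanding commands:  {sas_outstanding}")
print(f"NVMe outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: roughly {nvme_outstanding // sas_outstanding:,}x")
```

The point is not that any workload needs four billion in-flight commands; it is that NVMe removes the single shallow queue as the choke point, letting each CPU core drive its own queue in parallel.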

In this StorageShort, we discuss what exactly NVMe Over Fabrics is.

Networks, just like storage, have continued to advance, providing better communications, broader bandwidth and enhanced IO control. The problem is, like flash storage, advanced networking is encumbered by SCSI. NVMe Over Fabrics allows the network to have the same command set and queue depth that in-server NVMe has. It should allow data centers to tap into the full potential of their networks just like NVMe allows them to tap into the full potential of their flash storage.

Why Does The Data Center Need NVMe Over Fabrics?

For some data centers, today’s SCSI-based SAS flash drives and high speed networks provide more than enough performance. But for an increasing number of data centers it does not. It is reasonable to expect that as applications catch up with the new performance characteristics of flash, the number of data centers needing more performance will increase. Eventually most data centers will hit a performance wall, and since they will already be all-flash or at least mostly flash, adding more flash won’t be the answer. The next bottleneck is the network.


Before NVMe Over Fabrics, the network, just like the internal server connection to the flash drives, primarily communicated SCSI or some sort of file protocol like NFS or SMB. NVMe-oF allows those networks to communicate via the NVMe protocol, which means they too can take advantage of the advanced command set and advanced queuing.
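As a concrete sketch of what this looks like on the initiator side, the Linux nvme-cli tool can discover and attach to a remote NVMe-oF subsystem in a couple of commands. The address, port, and NQN below are placeholders, and the exact transport flag depends on the fabric in use (RDMA, TCP, or FC):

```
# Discover the NVMe-oF subsystems a target exports (example address/port).
nvme discover -t rdma -a 192.168.1.100 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN shown).
nvme connect -t rdma -a 192.168.1.100 -s 4420 \
    -n nqn.2016-06.io.example:subsystem1

# The remote namespace now appears as a local NVMe block device.
nvme list
```

Once connected, the remote namespace behaves like a local NVMe drive, which is exactly the point: the NVMe command set travels end to end instead of being translated into SCSI at the network boundary.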

Before we see NVMe Over Fabrics attached servers, the first NVMe Over Fabrics implementations will likely be private to a storage cluster. An NVMe network should deliver near in-server storage IO performance and latency. This near in-server performance should allow for the creation of very fast, low-latency scale-out storage infrastructures. Imagine a scale-out architecture with scale-up latencies.

StorageSwiss Take

NVMe is a protocol that is already seeing rapid adoption, and that adoption will only accelerate. The first iterations will live inside storage systems to speed up internal IO. Then, thanks to NVMe Over Fabrics, the use case will expand to highly scalable, low-latency scale-out storage architectures, and finally to complete end-to-end NVMe connectivity.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

Posted in StorageShort
