NVMe and NVMe Over Fabrics Critical To Next Generation Storage Architectures

Storage media used to be the slowest component within the storage architecture. Now, thanks to flash, it is the fastest. While the performance and low latency of flash allow data centers to make significant strides in application scale and response time, flash also exposes other weak spots within the storage architecture. IT needs to address those weak spots in order to fully exploit flash’s capabilities.

Understanding the Weak Spots in the Storage Architecture

The weak spots in the storage architecture are essentially everything that surrounds the actual flash media; in other words, the storage system itself. The two primary areas of concern are the software that drives the storage and the network within that system. Two aspects of the network are particularly important: the internal network that allows the storage software to communicate with the flash drives, and the external network that allows the storage system to communicate either with other nodes in the system or with the attaching hosts.

The Internal Network Problem

Storage systems, whether scale-out or scale-up, are basically servers. As servers, they have a certain amount of processing power which, among other things, the storage software uses to move data to and from the flash media. For most legacy storage systems, that communication path is Serial Attached SCSI (SAS), and most flash systems today leverage 12Gbps SAS for it. That speed is relatively fast, but the communication is still SCSI-based, a protocol designed to let CPUs communicate with rotating hard disk drives, which have far higher latency than flash drives do.
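
On a Linux host you can actually see which path a given drive takes, since the kernel exposes it through standard sysfs entries. The sketch below is purely illustrative and assumes default kernel device naming (sdX for SCSI-path devices, nvmeXnY for NVMe):

```python
# Minimal sketch (Linux-only): report which block devices reach the CPU over
# the legacy SCSI path versus native NVMe, using standard sysfs entries.
import os

for dev in sorted(os.listdir("/sys/block")):
    if dev.startswith("nvme"):
        proto = "NVMe (PCIe, no SCSI translation)"
    elif dev.startswith("sd"):
        proto = "SCSI (SAS/SATA path)"
    else:
        continue  # skip loop devices, device-mapper volumes, etc.
    with open(f"/sys/block/{dev}/queue/rotational") as f:
        media = "rotating disk" if f.read().strip() == "1" else "solid state"
    print(f"/dev/{dev}: {proto}, {media}")
```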

A new storage protocol has emerged: NVMe (Non-Volatile Memory Express). It is designed specifically for low-latency, memory-based storage devices like flash. It replaces SCSI and provides a new communication path to memory-based storage, with far deeper queues and higher command counts that take advantage of the low latency flash provides.
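
The difference in scale is dramatic. As a back-of-the-envelope illustration, the sketch below compares the outstanding-command capacity of a typical SAS device against the NVMe specification's maximums (the figures are commonly cited spec limits, not measurements of any particular product):

```python
# Outstanding-command capacity: a typical SAS device exposes a single queue
# of roughly 254 commands, while the NVMe spec allows up to 65,535 I/O queues
# of up to 65,536 commands each (spec maximums, used here for illustration).
sas_queues, sas_depth = 1, 254
nvme_queues, nvme_depth = 65_535, 65_536

print(f"SAS  in-flight commands: {sas_queues * sas_depth:>13,}")
print(f"NVMe in-flight commands: {nvme_queues * nvme_depth:>13,}")
```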

As is the case with any new standard, it takes time for NVMe to be adopted and become broadly available. Flash SSD vendors were quick to deliver drives that support the specification, but optimal use also requires the latest PCIe architectures in the servers themselves. In other words, servers need to be refreshed. The first implementations of end-to-end NVMe technology are in the latest generation of servers now coming to market.

The first storage systems able to take advantage of NVMe and the performance it provides will come from storage vendors whose products are primarily software. These vendors can update their software to directly support NVMe and then load it onto the new servers as soon as they become available. By contrast, a storage vendor whose system is less software-defined and more tied to a specific hardware platform will have to wait for that hardware to be refreshed before it can fully exploit NVMe.

The External Network Problem

The second challenge that flash-based storage systems face is the external network, which connects the system to other storage nodes in a scale-out configuration and to the physical servers that store and read data.

As scale-out storage systems grow, the networking within those systems becomes critical. These architectures scale by adding nodes to the cluster, and as more nodes are added, inter-node communication increases. Any overhead in the communication between nodes can become a significant issue for these systems and increase overall system latency.
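
A rough model shows why. Even if every node exchanges only one message with every other node, the number of inter-node paths grows quadratically, so any fixed per-message overhead is paid many more times as the cluster grows (a hypothetical full-mesh sketch):

```python
# Why per-message overhead compounds in scale-out clusters: with n nodes in a
# full mesh there are n*(n-1) directed inter-node paths, so a fixed latency
# overhead per message is incurred far more often as the cluster grows.
def mesh_paths(n: int) -> int:
    return n * (n - 1)

for n in (4, 8, 16, 32):
    print(f"{n:>2} nodes -> {mesh_paths(n):>5} inter-node paths")
```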

NVMe is also being advanced as a networking protocol, NVMe Over Fabrics (NVMe-F). NVMe-F enables very high speed, very low latency connections. These connections also typically use some form of remote direct memory access (RDMA), minimizing the involvement of the CPUs on either end. NVMe-F is an ideal way for scale-out architectures to limit increases in latency as the number of nodes grows.
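
On Linux, an NVMe-F connection from an initiator to a remote subsystem is typically established with the nvme-cli utility. The sketch below assumes nvme-cli and an RDMA-capable fabric are in place; the address and NQN are placeholders, not a real target:

```python
# Hedged sketch: attach a remote NVMe-F subsystem over RDMA using nvme-cli
# (assumes the nvme-cli package and the nvme-rdma kernel module are loaded;
# every value below is a placeholder, not a real target).
import subprocess

subprocess.run([
    "nvme", "connect",
    "-t", "rdma",                           # transport: RDMA fabric
    "-a", "192.0.2.10",                     # hypothetical target address
    "-s", "4420",                           # conventional NVMe-F port
    "-n", "nqn.2016-06.io.example:subsys1"  # hypothetical subsystem NQN
], check=True)
```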

The final step in optimizing the network component of the storage architecture is the connection to the physical hosts. Today that connection is typically either Fibre Channel or iSCSI based. While advances in both FC and IP technologies provide the raw bandwidth flash architectures require, they are still burdened by the latency and inefficiency of the SCSI protocol. NVMe connectivity to the host via NVMe-F will optimize that communication path as well. The result should be an eventual end-to-end NVMe communication path that enables flash to reach its full potential.
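
As a thought experiment, the sketch below tallies a hypothetical per-I/O latency budget for a SCSI-based path versus an end-to-end NVMe path. Every figure is an assumption chosen to illustrate where the savings might come from, not a measurement:

```python
# Illustrative (not measured) latency budgets, in microseconds, for one read.
scsi_path = {"host SCSI stack": 25, "FC/iSCSI transport": 50,
             "array SAS backend": 25, "flash media": 100}
nvme_path = {"host NVMe stack": 5, "NVMe-F transport": 10,
             "array NVMe backend": 5, "flash media": 100}

for name, path in (("SCSI end to end", scsi_path),
                   ("NVMe end to end", nvme_path)):
    print(f"{name}: {sum(path.values())} us total")
```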

StorageSwiss Take

Moving to the next generation of flash performance will, for most organizations, be a multi-step process in which they address bottlenecks as they occur. Most organizations already generate enough IO to make the internal communication advantages of NVMe over SCSI a worthwhile upgrade, and one that storage system vendors can address almost immediately. In the same way, an NVMe-connected scale-out architecture should provide a much more efficient communication path between nodes and enable new capabilities; this, too, is something most organizations will be able to take advantage of today or in the near future.

The last step for most organizations is the move to NVMe-F connected physical servers. For now, most environments will get what they need from the first two steps plus an upgrade in standard network bandwidth. The good news is that NVMe-F and SCSI-based protocols can coexist on the same network, so the transition can be gradual.

NVMe is also another endorsement of the software-defined storage strategy. NVMe will require new servers, new flash drives and new network adapters, and software-defined storage solutions can adapt their architectures to these new components much more easily.

Sponsored by Kaminario

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

Comments on “NVMe and NVMe Over Fabrics Critical To Next Generation Storage Architectures”
  1. fstevenchalmers says:

    A key aspect of NVMe is the optimized host software stack. In rough terms (take these numbers as conceptual and not actual), a 10 microsecond path through the host stack is consistent with supporting 100,000 IO/sec, while the stack of a couple of years ago was more like 50 microseconds (20,000 IO/sec), and let’s not even talk about the Fibre Channel stacks of a decade or two ago. And yes, this is more complex than I’ve just presented it, with multicore and multiprocessor servers.

    Having spent a couple of years on Gen-Z before I “retired”, I remain convinced that storage needs to borrow from user space supercomputer communication (a 20 year old technology, of which RDMA is perhaps the most visible piece that’s mainstreamed in commercial computing) and head for 100ns path lengths entirely in user space libraries. Doing so, of course, requires storage to go through a control plane / data plane split the way networking did decades ago…which will be “very interesting” if the industry pulls it off.
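
In round numbers, the relationship the comment describes is simply the reciprocal of the per-I/O path length (a sketch using the comment's conceptual figures plus its 100 ns target):

```python
# A serialized host stack of T microseconds per I/O caps a single submitter
# at 1,000,000 / T IOPS (conceptual figures from the comment above).
for stack_us in (50, 10, 0.1):  # older stack, NVMe-era stack, 100 ns target
    print(f"{stack_us:>5} us per I/O -> {1_000_000 / stack_us:>12,.0f} IO/sec")
```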

