NVM Express (NVMe) is an optimized, high-performance, scalable host controller interface with a streamlined register interface and a larger command set, designed for accessing memory-based storage attached through the PCIe bus. Put simply, it is a better protocol for flash-based drives than the one it is replacing, SCSI.
The reason for NVMe is that the SCSI protocol adds too much latency and does not allow flash to reach its optimum level of performance. The significant differences are the chosen interface and the number of simultaneous commands that can be sent to the drive. NVMe supports 64,000 queues, up from the single queue the legacy Advanced Host Controller Interface (AHCI) supports, and each NVMe queue can hold 64,000 commands, up from the 32 commands AHCI supports in its one queue.
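The arithmetic behind those numbers is worth spelling out. A minimal sketch (the constant names are mine, not from any specification) comparing how many commands each interface can keep in flight at once:

```python
# Illustrative comparison of maximum in-flight commands.
# Queue and depth figures are the round numbers cited in the article.

AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32
NVME_QUEUES, NVME_QUEUE_DEPTH = 64_000, 64_000

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH  # 32 commands total
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH  # 4,096,000,000 commands total

print(f"AHCI: {ahci_outstanding} commands in flight")
print(f"NVMe: {nvme_outstanding:,} commands in flight")
```

The difference is not incremental: NVMe's ceiling on outstanding work is roughly eight orders of magnitude higher, which is what lets a host saturate a flash device instead of starving it.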
In the hard drive era, SCSI’s limited number of queues and commands was less of an issue, since the rotational latency of the drive would not allow it to take advantage of more queues or commands. But flash systems, with no rotational latency, can respond almost instantly to IO operations, so the more commands that can be sent to the device at the same time, the better the performance will be.
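The relationship between queue depth and throughput can be sketched with Little's Law: sustainable IOPS is bounded by outstanding commands divided by per-command latency. The latency figures below are illustrative assumptions, not measurements from the article:

```python
# Little's Law sketch: max IOPS ~= queue_depth / per-command latency.
# Latency values are illustrative assumptions, not benchmarks.

def max_iops(queue_depth: int, latency_seconds: float) -> float:
    """Upper bound on IOPS a device can sustain at a given queue depth."""
    return queue_depth / latency_seconds

# A 7,200 RPM hard drive averages ~4 ms of rotational latency alone,
# so even one queue of 32 commands exceeds what it can consume.
hdd = max_iops(32, 4e-3)            # ceiling ~8,000 IOPS

# A flash device answering in ~100 microseconds is capped far below
# its potential by the same 32-command queue...
ssd_shallow = max_iops(32, 100e-6)  # ceiling ~320,000 IOPS

# ...but scales as more commands are kept outstanding.
ssd_deep = max_iops(1024, 100e-6)   # ceiling ~10,240,000 IOPS

print(hdd, ssd_shallow, ssd_deep)
```

This is why AHCI's single 32-command queue was never the bottleneck for spinning disks but immediately becomes one for flash.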
Where Would We Be Without NVMe?
NVMe is an industry standard maintained by the NVM Express consortium (NVM Express, Inc.). Without NVMe, the industry was well on a path to proprietary storage protocols to get around the limitations of SCSI. Before the NVMe standard, several vendors had come to market with their own proprietary storage protocols designed to maximize flash performance, and the data center was heading toward a world of siloed flash storage hardware tied to specific operating systems. NVMe is a universal standard, available across almost all operating systems and offered by virtually all vendors.
Do We Need NVMe?
The big question for IT is, will NVMe actually make a difference in their data centers? The next two blogs in this series will detail how NVMe will manifest itself in storage systems and in the infrastructure, but essentially NVMe will show up in two ways. First, storage system manufacturers will use the technology inside their storage systems. Then, later, the network itself will be NVMe-based thanks to an initiative called NVMe over Fabrics.
Most data centers will see an immediate performance improvement by moving to all-flash arrays that support NVMe internally. These systems will be able to support more total workloads, a greater variety of workloads, and specific workloads that can leverage lower-latency responses.
Systems with internal NVMe drives should also allow customers to save money. Data centers will seldom need to add storage systems for performance reasons, and storage system manufacturers, if they take the right approach, should be able to scale capacity further than ever.
In the end, the goal of all-flash and especially NVMe-based flash systems is to make the CPU work harder. More workloads per storage system also means more workloads per physical host. Virtualization environments supported by an NVMe flash array should be able to support dozens, maybe hundreds, of VMs per physical host. Database applications based on Oracle or MS-SQL should be able to scale much further per physical host, lowering licensing costs, which are based on the number of CPU cores in use.
NVMe is a standard that is desperately needed to advance the state of memory-based storage. It’s not just for flash, and it will become even more important as next-generation storage memory solutions arrive. Almost every data center should see a performance benefit and cost reductions from the introduction of NVMe-based systems into its environment. The good news is that everything is automatic: nothing changes on the application side; the world just becomes more responsive.
Sponsored by Tegile