Hybrid Flash Arrays and All-Flash Arrays are fundamentally changing the way IT designs storage architectures. IT can create virtual environments with much higher virtual machine to physical server ratios, scale up databases to support exponentially more users, and add new workloads such as Artificial Intelligence and Machine Learning that promise to give organizations new insight. Each of these use cases takes full advantage of flash performance, and combined they threaten to push current flash technology to the breaking point. Yet the problem is not the flash media but the ecosystem of components that surround it.
One of the more significant performance problem areas is the interface the storage system uses to communicate with the flash media. Typically, that interface is SATA (ATA-based) or SAS (SCSI-based), and while both protocols deliver respectable bandwidth, they come from the era of hard disk drives and lack the raw IO performance that flash can deliver. They also lack the command count and queue depth needed to keep a flash drive busy.
NVMe is the next step in storage IO protocols. It was designed from the outset for flash media, and it leverages the PCIe bus interface. NVMe supports a significantly higher command count and queue depth. It also has a networking component, NVMe over Fabrics (NVMe-oF or NVMf), which means that IT can now build a shared storage infrastructure with performance and latency similar to direct-attached storage.
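The queue depth gap is easy to quantify. The AHCI interface behind SATA exposes a single queue of 32 commands, while the NVMe specification allows up to 65,535 I/O queues per controller, each up to 65,536 commands deep. A back-of-the-envelope sketch, using spec maximums rather than what any particular drive exposes:

```python
# Back-of-the-envelope comparison of maximum outstanding commands.
# Figures are spec maximums (AHCI for SATA, the NVMe specification for NVMe);
# real NVMe devices expose far fewer queues, typically one per CPU core.

AHCI_QUEUES, AHCI_DEPTH = 1, 32            # SATA/AHCI: one queue, 32 slots
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536   # NVMe: up to 64K queues x 64K deep

ahci_total = AHCI_QUEUES * AHCI_DEPTH
nvme_total = NVME_QUEUES * NVME_DEPTH

print(f"AHCI/SATA max outstanding commands: {ahci_total}")
print(f"NVMe max outstanding commands:      {nvme_total:,}")
```

Even though shipping drives expose only a fraction of those queues, the spec headroom is what lets NVMe keep flash busy across many CPU cores at once.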
The first generation of NVMe focuses on the internals of the storage system. Step one is to use NVMe drives instead of SAS or SATA drives. For vendors, this means creating a set of internals that support NVMe connectivity both inside the main system and between the main system and its storage shelves. The second step is to make sure the software and storage system CPUs can keep up with the IO potential of the NVMe-based drives. The CPUs are likely to be multi-core, so the software needs to be multi-threaded to achieve the best performance.
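The multi-threading point can be illustrated with a small sketch: issuing reads from a thread pool so that several I/O commands are in flight at once instead of serially. This is illustrative application-level Python, not storage-array firmware; the file, chunk size, and worker count are arbitrary assumptions.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096  # illustrative 4 KiB read size

def read_chunk(path, offset):
    # Each worker opens its own descriptor and issues an independent read,
    # keeping multiple I/O requests in flight concurrently.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)

# Create a throwaway file to read back in parallel.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(CHUNK * 8))
    path = tmp.name

offsets = range(0, CHUNK * 8, CHUNK)
with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map preserves order, so the chunks reassemble correctly.
    chunks = list(pool.map(lambda off: read_chunk(path, off), offsets))

data = b"".join(chunks)
print(f"read {len(data)} bytes across {len(chunks)} parallel chunks")
os.unlink(path)
```

Storage-system software applies the same principle at a much larger scale, pinning I/O threads to cores so no single CPU becomes the bottleneck in front of the NVMe drives.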
As a practical matter, first-generation NVMe systems will not leverage NVMe-oF. There is still work to be done stabilizing NVMe as a network protocol, and complete plug-and-play capabilities are lacking at this point. Today most NVMe flash arrays use standard, high-performance Fibre Channel or Ethernet connectivity, which for most data centers is more than adequate.
The Use Cases for NVMe
NVMe Flash Arrays don’t need to wait for a whole new set of use cases. The same workloads that benefit from a standard all-flash array will also benefit from an NVMe Flash Array. When coupled with an NVMe all-flash array, these workloads can support even denser virtual machine populations and scale user counts even higher.
Where NVMe Flash Arrays really shine though is when they are used for more modern workloads such as high velocity analytics, artificial intelligence, and machine/deep learning. These workloads demand the highest levels of performance and NVMe Flash is able to deliver it.
What to Watch Out For in NVMe All-Flash Arrays
As is typically the case when a new storage paradigm comes to market, vendors are delivering new products that may or may not be of interest to customers. IT needs to focus on solving the problem at hand, not buying a system niched down to a specific use case. While there are systems that offer millions of IOPS, most data centers don’t have the infrastructure or the workloads to drive that level of performance. The extreme-IOPS systems also tend to sacrifice capabilities. As a result, the organization buys these systems for only a few specific use cases and is forced to support multiple storage software interfaces.
What to Look for in NVMe All-Flash Arrays
A customer should not have to sacrifice capabilities to take advantage of the performance of an NVMe Flash Array. The data services that come with the system should provide both file and block access and a full complement of capabilities such as thin provisioning, snapshots, replication, and clones. The system should also enable the organization to reduce its storage footprint through data efficiency services such as deduplication and compression.
A Plan for 2019
For most organizations, an NVMe Flash Array is the next logical choice in storage system progression. An NVMe Flash Array enables the organization to improve performance substantially over prior generations with no other changes to the environment. During the NVMe upgrade, the organization should examine its network capabilities to make sure it can adequately sustain the data rates to and from the NVMe Flash Array. For most organizations, a refresh to NVMe, possibly paired with a network upgrade, will be all they need in 2019.
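That network check can be roughed out with simple arithmetic. The drive count and per-drive throughput below are illustrative assumptions, not vendor specifications, and the per-link figures are approximate usable throughput:

```python
# Rough check: can the front-end network sustain the array's flash throughput?
# All figures below are illustrative assumptions, not measurements.

DRIVES = 24            # assumed NVMe drive count in the array
GBPS_PER_DRIVE = 3.0   # assumed sequential read throughput per drive, GB/s

LINK_GBPS = {          # approximate usable throughput per link, GB/s
    "10GbE": 1.25,
    "25GbE": 3.125,
    "32Gb FC": 3.2,
    "100GbE": 12.5,
}

aggregate = DRIVES * GBPS_PER_DRIVE  # raw flash bandwidth the array could source
for link, gbps in LINK_GBPS.items():
    links_needed = int(-(-aggregate // gbps))  # ceiling division
    print(f"{link}: ~{links_needed} links to match {aggregate:.0f} GB/s of flash")
```

The point is not the specific numbers but the mismatch they reveal: an NVMe array can often source far more bandwidth than a legacy front-end network can carry, which is why the network belongs in the upgrade plan.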
As organizations plan beyond 2019, they need to start considering an NVMe network, either Fibre Channel or Ethernet, to take full advantage of NVMe connectivity. As NVMe-oF fully develops, the battle between shared storage networks and direct-attached storage will come to an end, as there will be almost no latency difference between the two.
NVMe is more than just a roadmap technology. It is available now. IT, though, needs to carefully consider what it expects from NVMe versus what NVMe can deliver. NVMe arrays typically have the potential to deliver far more performance than the data center will be able to generate for years to come. Buying the performance required today, combined with the enterprise-class features that IT has come to count on, is likely more important than purchasing a system that delivers far more than the required IOPS.
Sponsored by Western Digital