As 2019 approaches, the storage industry is buzzing about non-volatile memory express (NVMe). This new protocol and interface for flash storage stands to bring substantial value to a range of applications and workloads. NVMe is emerging as a key tool for meeting the stringent latency requirements of emerging artificial intelligence, machine learning, high-velocity analytics and NoSQL database workloads, which arguably cannot get enough performance and which are tomorrow’s business-critical applications. At the same time, the protocol enables IT to maximize CPU performance for traditional Tier 1 applications such as Oracle databases and Microsoft SQL Server that run businesses today, reducing the need to invest in scaling out infrastructure. For additional discussion, visit Storage Switzerland’s recent blog, Busting the NVMe Flash Myths.
The potential upside of adopting NVMe is significant, but IT should think critically about how it plans to embrace the new protocol in order to maximize return on investment and outcomes for the business. Chief among the pitfalls to avoid is creating additional fragmentation and new silos in the data center built around “extreme performance” storage infrastructure.
Some NVMe vendors are bringing to market proprietary componentry, including field-programmable gate arrays (FPGAs) and ASICs, rather than utilizing industry-standard PCIe cards and removable media. Proprietary components can solve near-term, point application performance problems, but they set the stage for longer-term problems as the business expands the range of workloads requiring NVMe-level performance, or as IT looks to extend NVMe performance benefits to traditional Tier 1 applications. Another factor that can contribute to new storage silos is that “extreme performance” requires specific, lowest-latency networking standards and bandwidth, limiting both internal and external network choices.
As is common when new systems come to market, many new NVMe solutions ship with a limited software stack that lacks enterprise-class storage features such as data management. In these instances, the application must provide these features itself, diverting valuable processing cycles away from pure data processing (the very objective of adopting NVMe). This pain point is exacerbated because NVMe’s faster processing speeds leave applications unable to hide inefficiency behind the latency of hard drives, creating urgency for NVMe vendors to invest in the efficiency of the software stack.
In the near term, vendor solutions will most commonly mix NVMe and serial-attached SCSI (SAS) protocols. IT shops have a significant existing investment in SAS, and the industry is currently largely consolidated around data storage management frameworks for SAS, an area where NVMe is still maturing. Much as tiered hierarchies of spinning and solid-state disk media exist today, we will see tiering across protocols: NVMe for applications that require the fastest performance, and SAS for more capacity-oriented workloads.
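To make the tiering idea concrete, here is a minimal sketch of how a placement policy across NVMe and SAS tiers might be expressed. The latency cutoff and the workload figures are illustrative assumptions, not vendor guidance or numbers from this article:

```python
# Hypothetical tier-placement sketch: route a workload to the NVMe tier
# or the SAS tier based on its latency target. The cutoff value is an
# illustrative assumption, not a product specification.

def place_workload(latency_target_us: float, capacity_tb: float) -> str:
    """Return the storage tier for a workload.

    Latency-sensitive workloads (e.g. NoSQL databases, real-time
    analytics) land on NVMe; capacity-oriented workloads stay on SAS.
    """
    NVME_LATENCY_CUTOFF_US = 100  # assumed threshold for "extreme performance"
    if latency_target_us <= NVME_LATENCY_CUTOFF_US:
        return "nvme"
    return "sas"

print(place_workload(50, 2))      # latency-sensitive database -> nvme
print(place_workload(5000, 500))  # capacity-oriented archive  -> sas
```

In practice such policies also weigh IOPS, queue depth and cost, but the core decision, fastest tier for latency-critical work and SAS for bulk capacity, follows this shape.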
An end-to-end NVMe approach is possible today, but IT shops should proceed with caution as standards continue to develop. That said, the ongoing development of NVMe protocols, and in particular the emergence of NVMe over fabrics (NVMe-oF), bodes well for future use cases that leverage NVMe in a shared or pooled storage architecture to drive down latency across the network, for traditional and new applications alike.
Regardless of the approach, IT should carefully evaluate cost and value tradeoffs. In a market heavily driven by performance, this “cost to value” framework should also weigh capabilities such as data compression and deduplication, along with scalability needs. Ensuring a complete software stack that addresses file, block and multi-protocol access as needed is also core to this framework.
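Data reduction feeds directly into that cost-to-value calculation: compression and deduplication multiply effective capacity, which lowers effective cost per gigabyte. A minimal sketch of the arithmetic, using made-up prices and reduction ratios purely for illustration:

```python
# Effective $/GB after data reduction. All figures below are
# illustrative assumptions, not market prices.

def effective_cost_per_gb(raw_price_usd: float, raw_capacity_gb: float,
                          reduction_ratio: float) -> float:
    """Price divided by effective (post-reduction) capacity."""
    return raw_price_usd / (raw_capacity_gb * reduction_ratio)

# Assumed: a 10 TB NVMe array at $20,000 with 4:1 compression + dedup,
# versus a 10 TB SAS array at $8,000 with 2:1 reduction.
nvme = effective_cost_per_gb(20_000, 10_000, 4.0)  # 0.50 $/GB
sas = effective_cost_per_gb(8_000, 10_000, 2.0)    # 0.40 $/GB
print(f"NVMe: ${nvme:.2f}/GB, SAS: ${sas:.2f}/GB")
```

The point of the sketch is that a higher-priced NVMe system with strong data reduction can land much closer to commodity $/GB than raw prices suggest, which is why these capabilities belong in the evaluation alongside raw performance.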