Potential NVMe Pitfalls and What to Look For in 2019

As 2019 approaches, the storage industry is buzzing about non-volatile memory express (NVMe). This new protocol and interface for flash storage stands to bring substantial value to a range of applications and workloads. NVMe is emerging as a key tool for meeting the stringent latency requirements of artificial intelligence, machine learning, high-velocity analytics and NoSQL database workloads, tomorrow's business-critical applications that arguably cannot get enough performance. At the same time, the protocol enables IT to maximize CPU performance (thus reducing the need to invest in scaling out infrastructure) for traditional Tier 1 applications such as Oracle databases and Microsoft SQL Server that are running businesses today. For additional discussion, visit Storage Switzerland's recent blog, Busting the NVMe Flash Myths.

The potential upside for adopting NVMe is significant, but IT should think critically about how it plans to embrace the new protocol to maximize return on investment and outcomes for the business. Chief among these considerations is avoiding the creation of additional fragmentation and new silos in the data center built around "extreme performance" storage infrastructure.

Some NVMe vendors are bringing to market proprietary componentry, including field-programmable gate arrays (FPGAs) and ASICs, as opposed to utilizing industry-standard PCIe cards and removable media. Proprietary components can address specific, near-term application performance problems, but they set the stage for longer-term problems as the business expands its range of workloads requiring NVMe levels of performance, or as IT looks to extend NVMe performance benefits to traditional Tier 1 applications. Another factor that can contribute to new storage silos is that "extreme performance" requires specific, lowest-latency networking standards and bandwidth, limiting both internal and external network choices.

As is common when new systems come to market, many new NVMe solutions are released with a limited software stack that does not provide enterprise-class storage features such as data management. In these instances, the application must provide those features itself, which takes valuable processing cycles away from pure data processing, the very thing adopting NVMe is meant to maximize. This pain point is exacerbated because NVMe's faster processing speeds leave applications nowhere to hide behind hard drive latency, creating urgency for NVMe vendors to invest in the efficiency of their software stacks.
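To make this concrete, consider a rough, hypothetical Python sketch of what application-level data reduction costs. The payload size, compression level and library choice below are illustrative assumptions only, but the principle holds: every cycle the host spends on compression or similar data services is a cycle not spent on the application's own work, overhead that a complete storage software stack would otherwise absorb.

    import os
    import time
    import zlib

    # Hypothetical sketch: the storage layer offers no data reduction, so the
    # application compresses data itself before writing it out.
    payload = os.urandom(1024) * 4096  # ~4 MB of repetitive, compressible data

    start = time.process_time()
    compressed = zlib.compress(payload, level=6)
    cpu_seconds = time.process_time() - start

    # Every second reported here is host CPU time spent on a storage feature
    # rather than on the application's own data processing.
    print(f"Compressed {len(payload):,} bytes to {len(compressed):,} bytes "
          f"in {cpu_seconds:.3f} seconds of host CPU time")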

StorageSwiss Take

In the near term, vendor solutions will most commonly mix NVMe and serial-attached SCSI (SAS) protocols. IT shops have a significant existing investment in SAS, and the industry is largely consolidated around data storage management frameworks for SAS, an area where NVMe is still maturing. Not unlike the tiered hierarchies of spinning disk and solid-state media that exist today, we will see a tiering of protocols: NVMe for applications that require the fastest performance, and SAS for more capacity-oriented workloads.

An end-to-end NVMe approach is possible today, but IT shops should proceed with caution as standards continue to develop. That acknowledged, the ongoing development of NVMe protocols, and in particular the emergence of NVMe over Fabrics (NVMe-oF), bodes well for future use cases that leverage NVMe in a shared or pooled storage architecture to drive latency down across the network, for traditional and new applications alike.

Regardless of the approach, IT should carefully evaluate cost and value tradeoffs. In a market heavily driven by performance, this cost-to-value framework should also account for capabilities such as data compression and deduplication, as well as scalability requirements. Ensuring a complete software stack that addresses file, block and multi-protocol access as needed is also core to this framework.
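As a simple, hypothetical illustration of that framework, the Python sketch below compares effective cost per gigabyte once data reduction is factored in. The prices, capacities and reduction ratios are made-up assumptions, not vendor figures; the point is that raw dollars-per-gigabyte alone can mislead when one system reduces data and another does not.

    # Hypothetical cost-to-value comparison: effective $/GB after data reduction.
    # All figures are illustrative assumptions, not vendor pricing.
    def effective_cost_per_gb(price_usd, raw_capacity_gb, reduction_ratio):
        """Cost per usable GB once compression and deduplication are applied."""
        effective_capacity_gb = raw_capacity_gb * reduction_ratio
        return price_usd / effective_capacity_gb

    # Example: an NVMe array without data reduction vs. a SAS all-flash array
    # with an assumed 3:1 reduction ratio.
    nvme_no_reduction = effective_cost_per_gb(price_usd=150_000, raw_capacity_gb=50_000, reduction_ratio=1.0)
    sas_with_reduction = effective_cost_per_gb(price_usd=100_000, raw_capacity_gb=50_000, reduction_ratio=3.0)

    print(f"NVMe, no data reduction: ${nvme_no_reduction:.2f} per effective GB")
    print(f"SAS, 3:1 data reduction: ${sas_with_reduction:.2f} per effective GB")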

Access Storage Switzerland’s webinar with Western Digital, What’s Your 2019 NVMe Strategy?, for additional discussion.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
