Will Your Workload Really Benefit from NVMe?

Non-volatile memory express (NVMe) is arguably one of the most important new technologies to make its way into the data center, because it can deliver the ultra-fast performance that a growing number of mission-critical applications require. However, NVMe by itself cannot save the day; it plays only one role in a complex ecosystem of factors that together determine workload performance. For example, NVMe's ultra-low latency can enable the rapid response times these applications require, but network bandwidth also has a substantial impact, as does the architecture's ability to scale resources independently and on demand, support parallel access, and process data at scale.

To maximize the return on their investment in premium NVMe storage, storage managers must introduce the technology into the data center in a way that applications can fully exploit. Doing so requires storage professionals to look beyond pure performance benchmark statistics, which do not account for variability in workload behavior or for other determining factors across the IT infrastructure. To obtain a more accurate picture of how a workload will perform once deployed on an NVMe solution, storage professionals should instead consider workload modeling, which simulates the workload's input/output (I/O) patterns in the context of the holistic environment in which that workload operates – without the overhead of spinning up the full infrastructure required to recreate that workload.
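
As a rough illustration of what replaying a modeled I/O pattern can look like, the Python sketch below assumes the fio benchmarking tool is installed; the device path and profile values are hypothetical placeholders standing in for numbers captured from your own monitoring, not recommendations. It drives a candidate NVMe device with a modeled read/write mix, block size, and queue depth, then reports the resulting IOPS.

```python
# Minimal sketch: replay a modeled workload profile against a candidate NVMe
# device using fio. All profile values below are illustrative assumptions;
# substitute figures captured from your own environment.
import json
import subprocess

# Hypothetical profile derived from production monitoring.
profile = {
    "read_pct": 70,        # percentage of I/Os that are reads
    "block_size": "8k",    # dominant I/O size
    "queue_depth": 32,     # outstanding I/Os per job
    "jobs": 4,             # concurrent workers
    "runtime_s": 120,      # test duration in seconds
}

cmd = [
    "fio",
    "--name=workload-model",
    "--filename=/dev/nvme0n1",   # hypothetical target; writing to a raw device is destructive
    "--ioengine=libaio",
    "--direct=1",
    "--rw=randrw",
    f"--rwmixread={profile['read_pct']}",
    f"--bs={profile['block_size']}",
    f"--iodepth={profile['queue_depth']}",
    f"--numjobs={profile['jobs']}",
    f"--runtime={profile['runtime_s']}",
    "--time_based",
    "--group_reporting",
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)
job = report["jobs"][0]
print("read IOPS:", job["read"]["iops"])
print("write IOPS:", job["write"]["iops"])
```

A full workload model would also account for burstiness, working-set size, and the compute and network layers discussed below, but even a simple replay like this reveals more than a vendor's headline benchmark numbers.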

Performance is a function of the workload itself, not of any one element of the IT infrastructure stack (such as NVMe storage). For example, a write-intensive workload may see the greatest acceleration from NVMe, but its performance will change if the workload tips toward becoming more read-intensive as user behavior changes. Additionally, data compression and deduplication technologies – commonly deployed alongside NVMe to increase usable disk capacity – are computationally intensive, and as a result will affect how the workload performs.
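
To see why the read/write mix matters, a back-of-envelope model helps: the workload's average latency is the mix-weighted blend of its read and write latencies, so a drift in the read/write ratio changes delivered performance even when the storage hardware does not change. The per-operation latencies in the sketch below are illustrative assumptions, not measurements of any particular device.

```python
# Back-of-envelope model: average I/O latency as a mix-weighted blend of
# assumed per-operation latencies. The microsecond figures are placeholders,
# not vendor or device measurements.
READ_LATENCY_US = 90    # assumed average read latency
WRITE_LATENCY_US = 25   # assumed average write latency (cache-absorbed)

def avg_latency_us(read_fraction: float) -> float:
    """Mix-weighted average latency for a given read fraction (0.0 to 1.0)."""
    return read_fraction * READ_LATENCY_US + (1 - read_fraction) * WRITE_LATENCY_US

for read_pct in (30, 50, 70, 90):
    print(f"{read_pct}% reads -> {avg_latency_us(read_pct / 100):.1f} us average")
```

The same workload on the same NVMe media looks meaningfully slower at 90% reads than at 30% reads in this toy model, which is exactly the kind of shift that benchmark datasheets cannot anticipate.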

Problems can come from other unexpected places as well. For example, the way host compute nodes communicate with each other can slow application performance if the application is waiting on a resource that resides on slower storage. This concern grows as virtual machine sprawl increases. Additionally, because NVMe is a relatively new technology, it is likely to see frequent software updates that can impact the end-user experience.

For more on how to build a workload validation practice to support an informed move to NVMe, watch Storage Switzerland’s webinar in conjunction with Virtual Instruments and SANBlaze, “Does Your Data Center Need NVMe?”.

Senior Analyst Krista Macomber produces analyst commentary and contributes to a range of client deliverables, including white papers, webinars and videos for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.
