Non-volatile memory express (NVMe) is arguably one of the most important new technologies to reach the data center, because it can deliver the ultra-fast performance that a growing number of mission-critical applications require. However, NVMe by itself cannot save the day; it plays only one role in a complex ecosystem of factors that together determine workload performance. For example, NVMe's ultra-low latency can enable the rapid response times these applications require, but network bandwidth, along with the architecture's ability to scale resources independently on demand and to support parallel access and processing at scale, also has a substantial impact.
To maximize the return on their investment in premium NVMe storage, storage managers must introduce the technology into the data center in a way that applications can fully exploit. Doing so requires storage professionals to look beyond pure performance benchmark statistics, which do not account for variability in workload behavior or for other determining factors across the IT infrastructure. To obtain a more accurate picture of how a workload will perform once deployed on an NVMe solution, storage professionals should instead consider workload modeling, which simulates a workload's input/output (I/O) performance patterns based on the holistic environment in which that workload operates, without the overhead of spinning up the full infrastructure required to recreate it.
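As a rough illustration of what workload modeling means in practice, the sketch below replays a synthetic I/O trace against two hypothetical device profiles and estimates aggregate service time. The device names and per-operation latency figures are illustrative assumptions for the sketch, not measurements of any real product.

```python
import random

# Hypothetical device profiles: per-operation service times in
# microseconds. These numbers are illustrative assumptions only,
# not measurements of any specific NVMe or SAS device.
DEVICE_PROFILES = {
    "nvme": {"read_us": 80, "write_us": 20},
    "sas_ssd": {"read_us": 150, "write_us": 60},
}

def simulate(trace, profile):
    """Estimate total service time (ms) for a serial I/O trace."""
    total_us = 0
    for op in trace:  # each op is "r" (read) or "w" (write)
        total_us += profile["read_us" if op == "r" else "write_us"]
    return total_us / 1000.0

# Generate a synthetic trace: 70% writes, 30% reads, 10,000 operations.
random.seed(42)
trace = ["w" if random.random() < 0.7 else "r" for _ in range(10_000)]

for name, profile in DEVICE_PROFILES.items():
    print(f"{name}: {simulate(trace, profile):.1f} ms")
```

A real modeling tool would also account for queue depth, parallelism, and network transit time; the point of the sketch is only that the same trace yields different estimates for different infrastructure profiles, without standing up the full environment.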
Performance is a function of the workload itself, not of any single element of the IT infrastructure stack (such as NVMe storage). For example, a write-intensive workload may see the greatest acceleration from NVMe, but its performance profile will change if shifting user behavior tips the workload toward being read-intensive. Additionally, data compression and deduplication technologies, which are commonly deployed alongside NVMe to increase usable disk capacity, are computationally intensive and will therefore affect how the workload performs.
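To make the read/write-mix point concrete, the toy calculation below shows how a workload's weighted mean latency shifts as its write fraction changes, and how an assumed inline data-reduction cost adds to every write. All latency values are illustrative assumptions, not vendor figures.

```python
# Hypothetical per-operation costs in microseconds; illustrative only.
READ_US, WRITE_US = 80, 20
COMPRESS_US = 40  # assumed CPU cost added to each write when inline
                  # compression/deduplication is enabled

def mean_latency_us(write_fraction, inline_reduction=False):
    """Weighted mean per-operation latency for a given write fraction."""
    write_cost = WRITE_US + (COMPRESS_US if inline_reduction else 0)
    return write_fraction * write_cost + (1 - write_fraction) * READ_US

# A write-heavy mix benefits most from fast writes...
print(mean_latency_us(0.7))
# ...data reduction claws some of that back...
print(mean_latency_us(0.7, inline_reduction=True))
# ...and the same device looks different if the mix tips read-heavy.
print(mean_latency_us(0.3))
```

Under these assumed numbers, enabling inline data reduction on a 70%-write workload raises mean latency more than tipping the mix itself does, which is why the reduction features deployed alongside NVMe belong in any workload model.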
Problems can come from other, unexpected places as well. For example, the way host compute nodes communicate with each other can slow application performance if the application is waiting on a resource that resides on slower storage, a concern that grows with increasing virtual machine sprawl. Additionally, because NVMe is a new technology, it is likely to see frequent software updates that can impact the end user experience.
For more on how to build a workload validation practice to support an informed move to NVMe, watch Storage Switzerland’s webinar in conjunction with Virtual Instruments and SANBlaze, “Does Your Data Center Need NVMe?”.