Why Workload Modeling and Monitoring for NVMe-oF?

When migrating to new technology, it is always important to conduct a careful cost/benefit analysis to understand the return on the investment and to confirm that it will deliver the desired outcomes (such as substantially faster application performance). When it comes to non-volatile memory express (NVMe) over Fabrics (NVMe-oF), cost/benefit analysis is critically important for a few reasons. First, migrating to NVMe-oF requires investment not just in more expensive storage media, but also in more expensive network connectivity. In a similar vein, the application performance acceleration promised by NVMe-oF is far from a given; it depends on the entire application and infrastructure ecosystem, especially in the shared storage approach that NVMe-oF creates. Finally, NVMe-oF is still maturing, both in its ecosystem and in product availability. Before investing in NVMe-oF, it is important to have a carefully architected plan for which products will be used for which workloads, to maximize return on investment (ROI) and to mitigate risk.

The first step in determining whether NVMe-oF is a fit is building a workload validation practice that provides insight into how the workload will perform in a real-world context. This means going beyond performance benchmark statistics to account, for example, for variable workload behaviors and for other elements of the IT infrastructure that can (and will) impact the application's performance. The problem is that owning and operating duplicate infrastructure for this kind of testing is expensive and time-consuming. Workload modeling solutions such as Virtual Instruments' WorkloadWisdom provide an alternate path by emulating how an application will perform given the unique parameters of a customer's production environment.
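
As a purely illustrative sketch (not WorkloadWisdom's actual model or API), the Python below shows the kinds of parameters a workload model captures beyond a headline benchmark number: read/write mix, block size, queue depth, and time-of-day burstiness. The profile values and the business-hours window are hypothetical placeholders.

# Illustrative sketch only -- not Virtual Instruments' WorkloadWisdom.
# Shows the kind of parameters a workload model captures beyond a single
# benchmark number: read/write mix, block size, queue depth, burstiness.
import random
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    read_pct: float          # fraction of I/Os that are reads
    block_size_kib: int      # dominant block size
    queue_depth: int         # outstanding I/Os per worker
    base_iops: int           # steady-state IOPS target
    burst_multiplier: float  # peak-to-average ratio during busy hours

def iops_at_hour(profile: WorkloadProfile, hour: int) -> int:
    """Return a target IOPS for a given hour, with a daytime burst
    and a little random jitter to mimic real-world variability."""
    busy = 9 <= hour <= 17                     # hypothetical business-hours window
    multiplier = profile.burst_multiplier if busy else 1.0
    jitter = random.uniform(0.9, 1.1)
    return int(profile.base_iops * multiplier * jitter)

if __name__ == "__main__":
    oltp = WorkloadProfile(read_pct=0.7, block_size_kib=8,
                           queue_depth=32, base_iops=50_000,
                           burst_multiplier=2.5)
    for hour in range(24):
        print(f"{hour:02d}:00  target IOPS ~ {iops_at_hour(oltp, hour):,}")

Driving a load generator with a time-varying target like this, rather than a single fixed rate, is what separates a workload model from a static benchmark run.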

It is equally important to be able to track an application's performance and availability once it is deployed. NVMe-oF typically runs mission-critical workloads that cannot tolerate bottlenecks or outages, so it requires an infrastructure monitoring solution that provides deep granularity, with more than just a snapshot of the infrastructure's state every three or five minutes. The tool should be able to correlate data from on-premises systems and cloud services into intelligence that is actionable in terms of the application's performance. For example, it could reveal pending capacity limitations or "noisy neighbors" that may be bogging down another application.
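
To make the granularity point concrete, here is a minimal sketch, assuming a Linux host and the kernel's /sys/block/<device>/stat counters rather than any vendor's collector, that samples a device every five seconds and flags intervals where average I/O latency crosses a threshold. The device name and alert threshold are placeholders; a production monitoring platform would correlate many such streams across hosts, fabrics, and arrays.

# Illustrative sketch only -- a minimal fine-grained poller, not a product.
# Samples Linux block-device counters every few seconds (far finer than a
# 3-5 minute snapshot) and flags intervals with elevated average latency.
import time
from pathlib import Path

DEVICE = "nvme0n1"            # hypothetical device name
INTERVAL_S = 5                # sample every 5 seconds
LATENCY_ALERT_MS = 2.0        # hypothetical alert threshold

def read_stat(device: str) -> tuple[int, int]:
    """Return (completed I/Os, milliseconds spent on I/O) from
    /sys/block/<device>/stat, per the kernel's block stat field layout."""
    fields = Path(f"/sys/block/{device}/stat").read_text().split()
    ios = int(fields[0]) + int(fields[4])       # reads + writes completed
    ticks_ms = int(fields[3]) + int(fields[7])  # read ticks + write ticks
    return ios, ticks_ms

if __name__ == "__main__":
    prev_ios, prev_ticks = read_stat(DEVICE)
    while True:
        time.sleep(INTERVAL_S)
        ios, ticks = read_stat(DEVICE)
        d_ios, d_ticks = ios - prev_ios, ticks - prev_ticks
        prev_ios, prev_ticks = ios, ticks
        if d_ios == 0:
            continue
        avg_latency_ms = d_ticks / d_ios
        if avg_latency_ms > LATENCY_ALERT_MS:
            print(f"ALERT: avg latency {avg_latency_ms:.2f} ms "
                  f"over last {INTERVAL_S}s ({d_ios} I/Os)")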

Virtual Instruments provides both workload modeling and workload monitoring. Additionally, it takes a full-stack, application-centric approach that can help boost confidence during the transition to NVMe-oF. Access our on-demand webinar, "Does Your Data Center Need NVMe?," to learn more about establishing a workload validation practice and how Virtual Instruments can help.


Senior Analyst Krista Macomber produces analyst commentary and contributes to a range of client deliverables, including white papers, webinars and videos, for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.
