Most businesses’ workload ecosystems are in a state of transition. There is plenty of buzz about modern workloads such as artificial intelligence (AI), machine learning (ML), high-velocity analytics and NoSQL databases as new tools to drive competitive advantage. The reality, though, is that these workloads must coexist with the more traditional databases and Tier 1 applications, such as Microsoft SQL Server and Oracle, that will remain fixtures in the data centers of tomorrow.
Serving today’s growing and diverse set of workloads makes purchasing the right storage solution a difficult decision. IT needs to balance the trade-offs among performance, capacity and cost. At the outset, many IT shops tend to believe that the most expensive, top-tier NVMe-enabled performance is required exclusively by the future-forward workload set, because these workloads are so performance hungry. And indeed, NVMe stands to bring tremendous value to these workloads, as faster data extraction, transformation and analytics translate into enhanced insights for the business in a much shorter period of time.
At the same time, more traditional workloads should not be excluded from the NVMe conversation. The low latency, high transactional IOPS and high throughput facilitated by high-performance storage such as NVMe can maximize the performance of the middleware running core business processes. Furthermore, an NVMe flash system enables each application server to work harder by increasing CPU utilization. Making the individual application perform more efficiently is critical for traditional applications that can only scale up, not scale out. Not only will these workloads perform faster, but they will also become more cost effective; notably, databases don’t have to be “sharded” to simulate a scale-out cluster. This efficiency at the storage and server tiers saves on a slew of costs, including software licenses, while at the same time reducing complexity.
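The license-cost argument above can be illustrated with some back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical numbers (server throughput, core counts and per-core license pricing are illustrative assumptions, not vendor figures); the point is only the shape of the math: if faster storage lets each server sustain more transactions, fewer servers, and therefore fewer licensed cores, are needed for the same workload.

```python
# Hypothetical illustration: higher per-server efficiency reduces the
# number of servers (and per-core licenses) needed for a fixed workload.

def servers_needed(total_tps: int, tps_per_server: int) -> int:
    """Round up the number of servers required to serve total_tps."""
    return -(-total_tps // tps_per_server)  # ceiling division

CORES_PER_SERVER = 16          # hypothetical core count per server
LICENSE_COST_PER_CORE = 7_000  # hypothetical per-core license price (USD)
WORKLOAD_TPS = 400_000         # hypothetical aggregate transaction rate

# Hypothetical per-server throughput: with NVMe, CPUs spend less time
# waiting on storage, so each server sustains more transactions.
sata_tps_per_server = 50_000
nvme_tps_per_server = 100_000

for label, tps in (("SATA SSD", sata_tps_per_server),
                   ("NVMe", nvme_tps_per_server)):
    n = servers_needed(WORKLOAD_TPS, tps)
    cost = n * CORES_PER_SERVER * LICENSE_COST_PER_CORE
    print(f"{label}: {n} servers, ${cost:,} in per-core licenses")
```

Under these assumed numbers, halving the server count halves the license bill; actual savings depend on real workload profiles and licensing terms.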
The value proposition for running traditional workloads on NVMe will further strengthen as the market continues to mature. Over time, this maturity will encourage the development of more standardized components and interfaces, which will facilitate not only lower-priced solutions, but also greater flexibility in creating solutions and enhanced interoperability with existing technologies. The total cost of ownership (TCO) equation will further tip to NVMe’s favor as the associated software stack continues to develop with an eye towards efficiency, to further take advantage of lower latency. Meanwhile, the ongoing development of NVMe over Fabrics will increase the ability to create shared NVMe storage pools, increasing reliability and scalability.
View Storage Switzerland’s webinar in conjunction with Western Digital, What’s Your 2019 NVMe Strategy?, to learn more about crafting an NVMe strategy that will bring value to modern and traditional applications alike.
NVMe’s latency advantages for workloads with strict requirements/SLAs are absolutely beneficial, but the net/net is the old adage that “a rising tide floats all boats.” Just as Ethernet climbed its way from 10Mbit to 100Mbit to 1G and then 10G/40G and beyond, faster I/O brings goodness to all.