Enterprises rely on a growing number of applications to run their business, and those applications carry performance requirements that are more divergent and more dynamic than ever before. New applications are spun up frequently, the applications running core business processes carry a broad range of service level agreements (SLAs), and their demands on compute, storage and networking resources fluctuate constantly.
Previously, IT professionals could use vendor-provided benchmark statistics for a common set of applications to justify a new infrastructure purchase with reasonable confidence. However, relying solely on these metrics is no longer a best practice as the application ecosystem splinters and becomes more dynamic.
This holds especially true in the storage market. The advent of solid-state drive (SSD) media and the Non-Volatile Memory Express (NVMe) access protocol introduces a new storage tier that is premium not only in performance but also in price. Additionally, vendors' performance claims vary widely. For storage professionals, confidently buying into NVMe requires more accurate and more continuous visibility into how NVMe arrays will affect their unique application ecosystem's performance.
The expense of migrating to NVMe makes it important for storage professionals to ensure that applications will receive a substantial and sustainable performance benefit (most notably in latency). NVMe typically requires an overhaul of infrastructure: existing storage arrays are not usually retrofitted for NVMe, and faster central processing units (CPUs) and networking are required to realize the full benefit. Furthermore, storage buyers must choose from a wide range of NVMe solutions.
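As a concrete illustration (not drawn from the article), one common way to baseline latency in a specific environment is a small synthetic test with an open-source tool such as fio, run identically against the incumbent array and the NVMe candidate. The device path and parameters below are assumptions and would need to be adapted to the environment under test:

# Minimal fio job file; run the same job on each platform and compare
# completion latency. /dev/nvme0n1 is a placeholder device path.
[global]
# Asynchronous I/O via libaio, bypassing the page cache (direct=1) so the
# device rather than host memory is measured.
ioengine=libaio
direct=1
# Fixed 60-second, time-based run so samples are comparable across platforms.
time_based=1
runtime=60
group_reporting=1

[randread-latency]
filename=/dev/nvme0n1
rw=randread
bs=4k
# Queue depth 1 isolates per-I/O latency rather than aggregate throughput.
iodepth=1
numjobs=1

A test like this only characterizes the device path, not the full application stack, which is precisely why the production workload modeling discussed below matters; but the completion-latency percentiles fio reports are a useful sanity check on vendor claims before a larger evaluation.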
Vetting solutions in-house is neither easy nor cheap, however, because a meaningful test bed requires an equivalent investment in data center networking. Storage professionals should instead consider a production workload modeling solution, such as the one from Virtual Instruments and SANBlaze that Storage Switzerland recently covered.
At the end of the day, storage is only one of the factors that affect application performance, and it is IT professionals' job to ensure those performance levels are met. With workload simulation, storage buyers gain hands-on experience with NVMe in a manner that factors in all elements of the enterprise's unique applications and infrastructure. This yields data that is true to that specific environment, from which storage buyers can make more informed purchasing decisions. Furthermore, workload simulation facilitates ongoing testing and validation, helping buyers keep up with the rapid pace at which new innovations come to market.
Hear more from experts at Storage Switzerland, Virtual Instruments and SANBlaze on using workload simulation to support a smart shift to NVMe by accessing our on-demand webinar.