Does the Storage Media Matter Anymore?

It used to be that the storage media was the determining factor in an application’s performance. However, with non-volatile memory express (NVMe) solid-state drives (SSDs) now having reached price parity with Serial-Attached SCSI (SAS) SSDs, the ability to achieve hundreds of thousands of input/output operations per second (IOPS) has practically become commoditized. The problem is that these drives are now exposing new application bottlenecks: chiefly the system architecture, the latency of the storage network, and the efficiency of the storage software stack.

Why the Storage System Architecture Matters

Fully capitalizing on very fast storage media requires the components of the storage system architecture around the media to come together in an optimized manner. How the various drives are interconnected, how those drives connect with internal storage switches, and how this network of drives connects to the CPU all impact the application’s performance. For example, the CPU must have enough PCIe lanes to absorb the rate at which the NVMe SSDs feed it data, as the lane-budget sketch below shows.
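As a back-of-the-envelope illustration, the following Python sketch checks whether a fully populated NVMe shelf can oversubscribe a CPU’s PCIe lane budget. The lane counts, drive count and per-lane bandwidth are assumed, illustrative values, not figures for any specific CPU or array.

# Can the CPU's PCIe lanes absorb the aggregate bandwidth of a shelf
# of NVMe SSDs? All numbers below are illustrative assumptions.

PCIE_GEN3_GBPS_PER_LANE = 0.985  # ~985 MB/s usable per PCIe 3.0 lane
LANES_PER_NVME_SSD = 4           # a typical U.2 NVMe drive is x4
CPU_PCIE_LANES = 48              # hypothetical lane budget left for storage
DRIVE_COUNT = 24                 # hypothetical 24-bay all-flash shelf

lanes_wanted = DRIVE_COUNT * LANES_PER_NVME_SSD
drive_bandwidth = lanes_wanted * PCIE_GEN3_GBPS_PER_LANE
cpu_bandwidth = CPU_PCIE_LANES * PCIE_GEN3_GBPS_PER_LANE

print(f"Drives want {lanes_wanted} lanes (~{drive_bandwidth:.0f} GB/s); "
      f"the CPU offers {CPU_PCIE_LANES} lanes (~{cpu_bandwidth:.0f} GB/s)")
print(f"Oversubscription: {lanes_wanted / CPU_PCIE_LANES:.1f}:1")

Under these assumptions the drives could deliver roughly twice what the CPU can ingest, which is exactly the kind of architectural bottleneck that swapping in faster media cannot fix.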

Why the Storage Network Matters

NVMe over Fabrics (NVMe-oF) is on a path to production deployments within the next year, and with it comes the promise of performance on par with direct-attached storage implementations, but with the superior efficiency of networked storage. However, the transition to NVMe-oF will not be rip-and-replace. NVMe-oF implementations will need to be able to plug into existing infrastructure in order to support the mixed-node environments that will exist. A rough latency budget, like the one sketched below, shows why a well-built fabric can keep networked NVMe close to direct-attached performance.
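As a rough illustration of that parity claim, the Python sketch below totals an assumed read-latency budget. The 10-microsecond fabric hop and 80-microsecond media read are placeholder round numbers, not measurements, chosen only to show how small the fabric’s share of the total can be.

# Illustrative read-latency budget for local vs. fabric-attached NVMe.
# All figures are assumed round numbers, not measurements.

def total_us(budget: dict[str, float]) -> float:
    """Sum the latency contributions, in microseconds."""
    return sum(budget.values())

direct_attached = {"host software stack": 10.0, "NAND media read": 80.0}
nvme_over_fabrics = {
    "host software stack": 10.0,
    "fabric round trip": 10.0,  # assumed RDMA-class network hop
    "NAND media read": 80.0,
}

for name, budget in (("DAS", direct_attached), ("NVMe-oF", nvme_over_fabrics)):
    fabric = budget.get("fabric round trip", 0.0)
    print(f"{name}: {total_us(budget):.0f} us total "
          f"({fabric:.0f} us of it in the fabric)")

With these assumptions the fabric adds on the order of ten percent to end-to-end latency, which is why the design of the network, rather than the media itself, becomes the deciding factor.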

Why the Storage Software Matters

Legacy storage software algorithms are plagued with inefficiencies because storage media used to be so slow that, within reason, application performance was not impacted by the efficiency, or lack thereof, of the storage software. That is no longer the case with NVMe SSDs and the continued advancement of CPUs. For example, storage software needs to be written to take advantage of the high core-count processors that have hit the market. Data protection and data reduction services like compression, deduplication and snapshots are becoming table stakes with high core-count processors, and customers do not want to compromise between these capabilities and performance. These data services must be computationally efficient, and they must be used intelligently, where they will have a substantial enough impact to justify any performance overhead. The sketch below shows one way a data service can spread its work across cores.
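As one illustration of the idea, and not any particular vendor’s implementation, the Python sketch below fans inline compression out across a pool of worker processes rather than compressing on a single thread. The chunk size, worker count and zlib compression level are assumed values for the example.

import zlib
from multiprocessing import Pool

CHUNK_SIZE = 1 << 20  # 1 MiB chunks: a hypothetical write-buffer granularity

def compress_chunk(chunk: bytes) -> bytes:
    # A cheap compression level favors throughput over ratio.
    return zlib.compress(chunk, 1)

def compress_stream(data: bytes, workers: int = 8) -> list[bytes]:
    # Split the buffer into chunks and compress them in parallel,
    # one chunk per worker process at a time.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with Pool(processes=workers) as pool:
        return pool.map(compress_chunk, chunks)

if __name__ == "__main__":
    payload = b"all-flash arrays love compressible data " * 500_000  # ~20 MB
    compressed = compress_stream(payload)
    ratio = len(payload) / sum(len(c) for c in compressed)
    print(f"{len(payload)} bytes in, {ratio:.1f}:1 reduction across chunks")

A production storage stack would more likely use pinned threads, lock-free queues or hardware offload rather than Python processes, but the scheduling principle is the same: a data service that cannot scale across cores becomes the new bottleneck.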

Violin Systems recently joined Storage Switzerland for a discussion on how to architect a flash storage system to optimize performance. Access our on-demand webinar, “Flash Storage – Deciding Between High Performance and EXTREME Performance”, to learn more.


Senior Analyst Krista Macomber produces analyst commentary and contributes to a range of client deliverables, including white papers, webinars and videos, for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research and leading market intelligence initiatives for media company TechTarget.
