There is no doubt that specific applications have an ever-growing need for more IOPS, millions in fact, and the number of organizations implementing these applications is on the rise. The traditional bottlenecks to achieving millions of IOPS are gone. Networks, thanks to NVMe over Fabrics, provide latencies that rival direct-attached storage, and NVMe-based drives are approaching a hundred thousand IOPS per drive. Yet most enterprise storage arrays can’t achieve anything close to the raw performance of the drives themselves. The remaining bottleneck is the storage software itself.
Storage software is a bottleneck because it tries to do too much. When an application writes and reads data, features like snapshots, thin provisioning, deduplication, and compression all alter the data and add latency. Where that software runs also matters: sharing the same CPUs as the application can degrade the performance of both the storage and the application.
Overcoming the storage software bottleneck is the next big hurdle for vendors as they push toward storage architectures that support these advanced application use cases. Interestingly, most modern applications, like Hadoop and Splunk, are already storage aware. In these high-performance, mission-critical environments, companies count on applications to protect themselves against node failures and media failures. Most applications also replicate data to remote sites to protect against site failure.
While there is some advantage to a storage system providing some of these capabilities, especially snapshots, they carry an inherent risk of lower performance. High-performance customers are often more than willing to sacrifice the convenience of snapshots to maintain performance.
The underlying problem for the storage system is that providing these features adds overhead: the system processes and alters inbound data so it can organize and optimize it. That alteration adds significant latency, and if the latency isn’t adequately mitigated, performance drops significantly.
The end result is that many high-performance use cases turn back to direct-attached storage to avoid these bottlenecks. But traditional direct-attached storage creates its own inefficiencies; as a result, many modern application environments report capacity utilization of less than 25%.
Introducing Apeiron – The Universal NVMe Platform
Apeiron is a shared NVMe storage solution. Unlike other solutions that are architected “around” NVMe, Apeiron is a native NVMe solution: it does not alter data being written to or read from shared NVMe storage. It uses a layer 2 protocol network, placing a 4-byte wrapper around each frame and delivering it natively to the shared NVMe storage. It counts on the application layer to provide features like protection from site, node, or media failure. Apeiron essentially provides a raw, shared NVMe store that, when assigned to the application, appears as locally attached storage. It delivers the performance and latency advantages of direct-attached storage with the efficiencies and scalability of shared storage.
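To make the encapsulation concrete, here is a minimal Python sketch of wrapping an NVMe command capsule in a 4-byte header for transport over a layer 2 network. The field layout shown (a 2-byte target ID plus a 2-byte queue ID) is purely an assumption for illustration; Apeiron does not publish its wrapper format.

```python
import struct

def wrap_nvme_frame(nvme_capsule: bytes, target_id: int, queue_id: int) -> bytes:
    """Prepend a hypothetical 4-byte fabric wrapper to an NVMe capsule.

    Assumed layout for illustration only: 2-byte target ID + 2-byte queue ID,
    network byte order. The real Apeiron wrapper fields are not public.
    """
    wrapper = struct.pack("!HH", target_id, queue_id)
    return wrapper + nvme_capsule

def unwrap_nvme_frame(frame: bytes):
    """Strip the 4-byte wrapper and return (target_id, queue_id, capsule)."""
    target_id, queue_id = struct.unpack("!HH", frame[:4])
    return target_id, queue_id, frame[4:]

# A placeholder 64-byte NVMe command capsule.
capsule = b"\x00" * 64
frame = wrap_nvme_frame(capsule, target_id=3, queue_id=7)
assert len(frame) == len(capsule) + 4  # only 4 bytes of added overhead
```

The point of the sketch is the overhead model: the payload passes through untouched, and the fabric adds a fixed 4 bytes per frame rather than reprocessing the data.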
What You Get in the Box
Apeiron is fundamentally software, but the company embeds that software into FPGAs, keeping performance high and leaving the CPU dedicated to the applications the storage system supports. Apeiron provides an NVMe interface card that IT installs in the application server. That interface card includes an FPGA with the client software embedded on it. The result is no overhead on the server and a native connection to the shared storage.
The shared storage component is a scale-out design built from a series of 2U storage nodes. Each node holds 24 2.5” NVMe SSDs and includes 32 Apeiron Data Fabric 40Gb/s Ethernet ports, creating a fully integrated switch fabric. The nodes also have redundant power supplies and cooling modules. Nodes can have capacities ranging from 38TB to 360TB today (with 720TB and beyond by the end of 2019) and leverage the NVMe fabric for nearly unlimited scale.
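The node specs above imply some simple per-drive and per-node arithmetic, sketched here in Python. The derived figures are back-of-the-envelope calculations from the quoted numbers, not values published by Apeiron.

```python
# Back-of-the-envelope math from the quoted node specs (derived, not vendor-published).
DRIVES_PER_NODE = 24
FABRIC_PORTS = 32
PORT_GBPS = 40

min_drive_tb = 38 / DRIVES_PER_NODE    # smallest config: ~1.6 TB per drive
max_drive_tb = 360 / DRIVES_PER_NODE   # largest config today: 15 TB per drive
fabric_tbps = FABRIC_PORTS * PORT_GBPS / 1000  # aggregate fabric bandwidth per node

print(f"Implied per-drive capacity: {min_drive_tb:.1f} TB to {max_drive_tb:.0f} TB")
print(f"Aggregate fabric bandwidth per node: {fabric_tbps:.2f} Tb/s")
```

Running this shows the 38TB-to-360TB range maps to roughly 1.6TB through 15TB drives, and the 32 ports add up to 1.28 Tb/s of raw fabric bandwidth per 2U node.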
From a performance perspective, each 2U node can generate 18.4 million IOPS. That is not a typo. Each node can generate 18.4 million IOPS with very low latency. Apeiron claims a protocol overhead of less than three microseconds, so today the solution is “bottlenecked” by the latency of the media itself: roughly 100 microseconds for NAND flash and ten microseconds for Intel Optane. Essentially, the Apeiron software moves the latency bottleneck back to the storage media. In tests on specific use cases, Apeiron claims an 88x performance increase over the typical Splunk architecture and a 49x increase over the typical Hadoop architecture.
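That latency claim can be sanity-checked with simple arithmetic: if protocol overhead is about three microseconds, the media accounts for nearly all of the end-to-end latency. A small Python sketch using only the figures quoted above:

```python
# Share of total read latency attributable to the media, given ~3 us protocol overhead.
protocol_overhead_us = 3
media_latency_us = {"NAND flash": 100, "Intel Optane": 10}

for media, lat in media_latency_us.items():
    total = lat + protocol_overhead_us
    share = 100 * lat / total
    print(f"{media}: {lat} us of {total} us total ({share:.0f}% media)")
# → NAND flash: 100 us of 103 us total (97% media)
# → Intel Optane: 10 us of 13 us total (77% media)
```

With NAND flash, the media is 97% of the latency budget, which is what "moving the bottleneck back to the storage media" means in practice; faster media like Optane shrinks the total but makes the fixed 3 us overhead proportionally more visible.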
StorageSwiss Take
Today there are dozens of workloads that need tens of millions of IOPS. Security and event management, business analytics, life sciences and drug discovery, financial services, research, modeling, and visualization are just a handful. In these environments, greater performance leads to faster results, increased profits, and better decisions. Storage software is the bottleneck to achieving multi-million IOPS performance. Vendors need to improve the quality of their software, improve how they host that software within the storage architecture, and simplify the add-on services the software provides. The goal is to move the bottleneck back to the storage media, something it seems Apeiron has achieved.
Having worked with Apeiron since the very beginning, it has been enlightening to see the multiple inflection points their products deliver and the multiple use cases and problems their solution solves. We have always been proud to be an Apeiron partner, and now we’re moving to help deliver greatly needed resources for Cyber with the ADS 1000. Anyone who does not look at this company is cheating themselves and their technology road map.