Previously, Storage Switzerland defined computational storage as an architecture that processes data within the storage device itself instead of transporting it to the host server’s central processing unit (CPU). Computational storage reduces the input/output (I/O) transaction load by cutting the volume of data that must be transferred between the main compute and storage planes. As a result, it stands to serve modern workloads, such as high-volume analytics at the edge, with faster performance and better infrastructure utilization. In this installment, we will explore in more detail how a shift to computational storage impacts data center architecture, as well as the value it stands to bring to the business.
A primary benefit of computational storage is faster and more efficient data processing. Computational storage offloads work from the host CPU, spreading that work across the storage drives instead. Without computational storage, a request made by the host CPU (such as an analytics query) requires that all data be transferred from the storage device to the CPU. The host CPU must then “thin down” the data before performing its designated pass. In a computational storage approach, the storage device first qualifies data for its relevance to the host CPU’s request, then moves only the relevant data to the main compute tier for processing.
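The difference between the two paths can be sketched in a few lines of Python. This is an illustrative model, not a real device API: the record layout, the predicate, and both functions are assumptions made for the example, and the drive-side filter is simulated on the host.

```python
# Hypothetical sketch: contrast host-side filtering with in-storage
# ("computational storage") filtering. Record layout and predicate are
# illustrative assumptions, not a real drive interface.

RECORDS = [{"id": i, "region": "west" if i % 4 == 0 else "east"}
           for i in range(1_000)]

def host_side_query(storage, predicate):
    """Conventional path: ship every record to the host, filter there."""
    transferred = list(storage)          # full data set crosses the interconnect
    return [r for r in transferred if predicate(r)], len(transferred)

def in_storage_query(storage, predicate):
    """Computational-storage path: the drive qualifies data first and
    moves only matching records to the host."""
    matches = [r for r in storage if predicate(r)]  # runs on the drive's CPU
    return matches, len(matches)         # only the matches are transferred

want_west = lambda r: r["region"] == "west"
_, moved_host = host_side_query(RECORDS, want_west)
_, moved_csd = in_storage_query(RECORDS, want_west)
print(moved_host, moved_csd)  # 1000 250
```

Both paths return the same query result; what changes is how many records cross the interconnect, which is exactly the I/O load the article describes.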
To further accelerate data processing, computational storage devices include multi-core processors, which enable multithreading, or the execution of multiple functions in parallel. For example, a single computational storage device can index data while simultaneously searching the drive’s data for a specific entry. Additionally, because each workload places less load on the host CPU, the main server has more processing power available to support additional workloads. The business can reduce its investment in expensive, high-performance CPUs while still meeting required application performance levels.
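The index-while-searching example can be sketched with a thread pool standing in for the drive’s cores. Everything here is an assumption for illustration: the data set, the two functions, and the use of Python threads as a stand-in for on-drive parallelism.

```python
# Hypothetical sketch of the multithreading claim: a drive's multi-core
# processor indexing data while a search runs in parallel. A thread pool
# stands in for on-drive cores; data and functions are illustrative.

from concurrent.futures import ThreadPoolExecutor

data = [f"entry-{i}" for i in range(10_000)]

def build_index(records):
    """Index pass: map each value to its position on the drive."""
    return {value: pos for pos, value in enumerate(records)}

def search(records, target):
    """Search pass: linear scan for a specific entry."""
    return next((pos for pos, value in enumerate(records) if value == target), -1)

with ThreadPoolExecutor(max_workers=2) as pool:
    index_job = pool.submit(build_index, data)            # "core 1": indexing
    search_job = pool.submit(search, data, "entry-9000")  # "core 2": searching
    index, hit = index_job.result(), search_job.result()

print(hit, index["entry-9000"])  # 9000 9000
```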
Another benefit of computational storage is that it makes a shared storage environment more attractive to organizations, even for the most performance-hungry workloads. Typically, a direct-attached storage approach serves these workloads, both to avoid storage network latency and to increase throughput by spreading data across many devices. However, this often leaves resources underutilized, and it introduces further delay because more devices must be searched for the relevant data. In contrast, computational storage drives, each with its own multi-core processor, allow an application to be ported to every drive at once. This provides a level of parallel processing that enables a microservices-like approach to running the application across all the individual drives. Processing data simultaneously in this way greatly reduces the time to locate data and deliver results to the host.
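The fan-out pattern described above can be modeled as dispatching the same search function to every drive at once and merging the per-drive results on the host. The shard layout, record naming, and both functions are illustrative assumptions; threads simulate the independent drives.

```python
# Hypothetical sketch of the "same application on every drive" idea:
# fan a search out to several simulated drives in parallel and merge
# the per-drive results on the host. Drive contents are illustrative.

from concurrent.futures import ThreadPoolExecutor

# Four simulated drives, each holding a shard of the data set.
DRIVES = [[f"rec-{d}-{i}" for i in range(1_000)] for d in range(4)]

def drive_search(shard, needle):
    """Runs 'on the drive': return only the matching records."""
    return [r for r in shard if needle in r]

def parallel_search(drives, needle):
    """Host side: dispatch the same search to all drives simultaneously."""
    with ThreadPoolExecutor(max_workers=len(drives)) as pool:
        per_drive = pool.map(drive_search, drives, [needle] * len(drives))
    return [hit for hits in per_drive for hit in hits]

print(parallel_search(DRIVES, "-2-500"))  # ['rec-2-500']
```

Each drive scans only its own shard, so the search time is bounded by the largest shard rather than the whole data set, which is the latency reduction the article attributes to this approach.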
Computational storage can also help an organization leverage its existing network infrastructure for much longer, as well as truly scale next-generation networks. Because its computational capabilities let the storage work on the larger data set first, computational storage exploits the higher internal I/O capability of the storage device and avoids having performance restricted by the network. As a result, the network interconnect becomes less critical. While bandwidth is seldom the bottleneck in performance-intensive environments, constant interaction with the network is: each network I/O adds latency, and computational storage can eliminate 80% or more of that interaction.
Computational storage stands to add value by accelerating the performance of multiple applications on the same infrastructure, while at the same time optimizing the utilization of infrastructure resources across the stack. In our next installment, we will explore in depth four key use cases for computational storage: hyperscale data centers, real-time analytics, content delivery networks (CDNs) and the intelligent edge.
Sponsored by NGD Systems