Modern workloads, such as analytics of data generated at the edge, require vast amounts of data to be processed in parallel. As discussed in our previous blog, the advent of solid-state drive (SSD) storage media has caused the interface between the flash controller and the storage bus, in most cases PCIe, to become the bottleneck to delivering the levels of performance these applications require. Common workarounds, including moving storage inside the server and applying more advanced networking capabilities to shared storage, result in a number of problems: underutilization of capacity, the creation of a single point of failure, and a more complex application architecture.
Enter Computational Storage
Computational storage, the practice of placing compute capabilities directly on storage so that data can be processed where it resides, stands to solve the problem of flooding a server’s PCIe bus. By bringing intelligence to the storage media itself, it eliminates the need to move 100% of the data from the media to the host central processing unit (CPU). Key benefits include faster sorting, analytics and searching: because data reduction and qualification occur before the data is sent to the host CPU, the system’s overall throughput increases. Faster processing also allows the host CPU to be better utilized and to scale across a larger number of workloads – thus cutting costs for the enterprise. Cost efficiencies are further increased through a reduced power envelope, as well as through reduced demand for server dynamic random-access memory (DRAM) and network bandwidth.
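The data-reduction idea can be illustrated with a small sketch. This is purely hypothetical code (the names `drive_side_filter` and `host_query` are illustrative, not any vendor's API): instead of shipping every record across the PCIe bus, each drive runs the filter in place and returns only the matching subset to the host.

```python
# Hypothetical sketch of data reduction in computational storage:
# the drive-side compute qualifies data in place, so only matching
# records cross the bus to the host CPU.

def drive_side_filter(records, predicate):
    """Runs on the drive's embedded CPU: filter data where it resides."""
    return [r for r in records if predicate(r)]

def host_query(drives, predicate):
    """Runs on the host: receives only pre-filtered results from each drive."""
    results = []
    for records in drives:
        results.extend(drive_side_filter(records, predicate))
    return results

# Simulate four drives, each holding 1,000 sensor records.
drives = [[{"id": d * 1000 + i, "temp": i % 100} for i in range(1000)]
          for d in range(4)]

# Query for hot readings: only ~1% of records ever leave the drives.
hot = host_query(drives, lambda r: r["temp"] >= 99)

total = sum(len(d) for d in drives)
print(f"records on media: {total}, records moved to host: {len(hot)}")
```

In this toy run, 4,000 records stay on the media and only 40 travel to the host, which is the throughput win the paragraph above describes.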
Computational storage is especially valuable in workloads such as big data analytics, artificial intelligence and machine learning, because it allows the data relevant to a query to be pinpointed on the storage media itself, so only that specific data set needs to be transported back to the host. The use cases do not stop there, however. They include search and filtering queries such as grep programs, encryption and key management, and extend to the growing number of edge data centers emerging with the rise of the distributed enterprise. More detail on key use cases will follow in our final blog in this series.
Must Applications Be Rewritten to Take Advantage of Computational Storage?
As with the migration to any new technology, computational storage should be evaluated for its compatibility with key applications. When evaluating computational storage solutions, enterprise customers are typically concerned that transitioning to the architecture will be a complex process that will put the availability of their critical applications at risk.
Computational storage pioneer NGD Systems eases the transition to the architecture. It calls its approach to computational storage “In-Situ Processing.” The architecture pairs a quad-core ARM CPU with hardware acceleration and a DRAM buffer on each device, all running a local operating system (OS) per drive. A key component of the computational storage value proposition is the ability to capitalize on growing SSD densities by enabling the host CPU to scale across a larger number of drives. For its part, NGD has focused on maximizing drive density, offering up to 16 terabytes (TB) of capacity in its Newport Platform.
To run applications on NGD Systems’ architecture, the only requirement on the host side is to recompile applications built for the x86 architecture for the 64-bit ARM architecture. NGD Systems’ graphical user interface (GUI) provides users with tools to enable, disable and transport applications from the host system to each drive’s OS for execution and completion. The host system then needs only to manage and execute the application on the data provided by the individual drives, creating a net reduction in data movement.
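The host-side pattern described above resembles a simple dispatch-and-merge loop. The sketch below is an assumption-laden illustration (the functions `run_on_drive` and `word_count` are stand-ins, not NGD's interface): the host sends the same program to each drive's OS, each drive executes it against its local data, and the host merges only the compact per-drive results.

```python
# Illustrative sketch of the dispatch-and-merge pattern: each drive's OS
# runs the program on local data; the host merges small partial results
# instead of pulling raw data across the PCIe bus.
from collections import Counter

def run_on_drive(local_data, program):
    """Stand-in for executing a (recompiled ARM) program on one drive's OS."""
    return program(local_data)

def word_count(lines):
    """The application shipped to each drive: count words in local data."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

# Two simulated drives, each holding its own slice of the data set.
drives = [
    ["the quick brown fox", "the lazy dog"],
    ["the fox jumps", "over the dog"],
]

# Host: dispatch the program, then merge the compact partial results.
merged = Counter()
for local_data in drives:
    merged.update(run_on_drive(local_data, word_count))

print(merged["the"])  # prints 4: each drive contributed its local count
```

Only the small `Counter` objects travel back to the host, which is the net reduction in data movement the paragraph describes.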
Our next blog will dive more deeply into the impact of computational storage on data center design.
Sponsored by NGD Systems