The density of flash storage is increasing every year: 64TB solid state drives (SSDs) are arriving next year, and within a year 100TB+ SSDs may become commonplace in the data center. At the same time, workloads like big data analytics, image recognition, machine learning, and artificial intelligence demand all the performance these systems can deliver.
To process the data sets these use cases require, data centers are building hyper-scale compute farms, and SSDs currently provide just the performance and capacity needed. The problem is the interconnect: even the latest PCIe connections can't sustain the potential throughput of a petabyte of data streaming from ten 100TB SSDs. In these environments, something must change.
That change may come in the form of Computational Storage using In-Situ Processing, which puts processing power on the SSD itself. Computational Storage can process data on the drive and send only the information needed back to the main CPU, saving bandwidth, memory footprint, and power consumption.
Computational Storage Use Cases
For example, when 100TB SSDs come to market, an image repository of 5PB can be stored on as few as fifty 100TB SSDs. When the organization wants to see whether a new image already exists in the repository, an application needs to scan the repository for a potential match. Without computational storage, the compute farm must scan the entire 5PB, pulled sequentially across the network, as rapidly as possible for potential matches. Analyzing all this data across the network requires the organization to invest heavily in network and memory resources to maintain performance.
With computational storage, the 50 drives can each process the image request individually. Not only are more processors available, but each processor works with a smaller subset (1/50th) of the data. Each drive that finds a potential match sends only that match back to the main compute tier for final analysis. It is important to note that computational storage sends only the relevant data across the network; in many cases, it reduces network bandwidth requirements by 90% or more. The result is a faster time to answer and a much smaller investment in network and memory.
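The division of labor described above can be sketched in a few lines of code. Everything here is illustrative, not NGD's actual API: the "drives" are plain in-memory shards, and a simple hash fingerprint stands in for a real image-matching algorithm. The point is the data flow, where each drive filters its own shard and only candidate matches cross the "network" to the host.

```python
import hashlib

def drive_side_scan(shard, query_fingerprint):
    """Hypothetically runs on a drive's embedded cores: scan only the
    local shard and return candidate matches, never the raw data."""
    return [img for img in shard
            if hashlib.sha256(img).hexdigest() == query_fingerprint]

def host_side_search(drives, query_image):
    """Runs on the host: fan the query out to every drive, then perform
    final verification only on the few candidates that come back."""
    fingerprint = hashlib.sha256(query_image).hexdigest()
    candidates = []
    for shard in drives:  # in reality, 50 drives working in parallel
        candidates.extend(drive_side_scan(shard, fingerprint))
    # Final analysis on the compute tier, over candidates only.
    return [c for c in candidates if c == query_image]

# Toy repository: 50 "drives", each holding a shard of tiny "images".
drives = [[bytes([d, i]) for i in range(100)] for d in range(50)]
query = bytes([7, 42])
matches = host_side_search(drives, query)
```

In this sketch the host receives one candidate instead of all 5,000 stored items, which is the bandwidth saving the in-situ model is after.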
NGD Systems with In-Situ Processing
NGD Systems is at the forefront of Computational Storage, leveraging patented In-Situ Processing. With NGD, each drive includes four ARM application cores as part of a single NVMe SSD controller. This dedicated processing power runs applications, while separate resources handle flash management. Getting an organization's applications running on the drive is straightforward: NGD supports APIs from various application vendors, and almost any Docker containerized application runs as-is on the drive.
Beyond use cases like machine learning and artificial intelligence are edge use cases like the Internet of Things. In these situations, there is limited room for compute resources. Consolidating compute onto the added processing power of an NGD storage device reduces the footprint of the IoT device and simplifies the networking.
To learn more about Computational Storage with In-Situ Processing, watch our latest LightBoard video.