Modern applications like big data analytics, facial recognition, IoT and video streaming, as well as next generation applications like artificial intelligence and machine learning, place unique demands on both the compute and storage infrastructures.
Most of these modern and next generation applications operate against a vast store of data. The application makes a query against that data set, which is processed in some way to yield the answer. They all require vast amounts of data to be stored and then count on compute to identify the subset of data the application needs to respond to the query.
The Problems with Moving Storage Closer to Compute
The most common strategy for addressing the challenges presented by modern and next generation applications is to move storage closer to compute. The strategy is to install storage inside each compute node and let the data reside there. Each query then requires that a large section of the data, in some cases all of it, be sent to the compute tier to identify the needed subset.
Moving storage to the compute does reduce network latency. However, while the CPU-to-media interconnect has improved with advancements like NVMe, there is still latency in the connection. There is also the complication of making sure the right process has local access to the right data.
Moving Compute Closer to Storage
The first step in the process for most of these modern and next generation applications is to reduce the working set of data. Essentially, if the data set is the haystack, the application lives to find the needles in that haystack. If this is the case, it may make more sense to move the compute to the storage. That way the media can perform the data reduction or qualification before data is sent to the main compute tier.
For example, a facial recognition program searching for Elon Musk dressed in black might send a request to each drive for images of Elon Musk. Those images are sent to the main compute tier, which performs the more fine-grained search for Elon Musk wearing black.
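To make the pattern concrete, here is a minimal Python sketch of the coarse-filter-on-the-drive, fine-filter-on-the-host flow described above. The drive list, the drive-side query function and the host-side matcher are hypothetical placeholders for illustration, not NGD Systems' actual interface.

```python
# Sketch of the computational-storage filtering pattern: each drive reduces
# the data set locally, and only the reduced candidates reach the host.

from concurrent.futures import ThreadPoolExecutor

DRIVES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"]  # assumed layout

def query_drive(drive, person="elon_musk"):
    """Hypothetical drive-resident filter: the drive scans its own data and
    returns only the images that match the coarse criterion."""
    # In a real deployment this would dispatch work to the processor embedded
    # in the drive; here it is just a placeholder returning an empty list.
    return []

def fine_grained_match(image, clothing="black"):
    """Host-side, compute-heavy check run only on the reduced candidate set."""
    return True  # placeholder for the expensive recognition model

# Fan the coarse query out to every drive in parallel...
with ThreadPoolExecutor() as pool:
    candidates = [img for drive_hits in pool.map(query_drive, DRIVES)
                  for img in drive_hits]

# ...then run the expensive fine-grained search only on what the drives returned.
matches = [img for img in candidates if fine_grained_match(img)]
```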
The first value of such an architecture is that compute for the environment scales, and does so at a very granular level: per drive. The second is that the bandwidth required to transfer data to the main compute tier is greatly reduced, since the drives send a much smaller subset of the data instead of all of it. The third is that the compute tier does not have to scale as rapidly, because the drives are doing more of the work.
Introducing NGD Systems
NGD Systems is announcing the availability of the industry's first SSD with embedded processing. This is not a processor for running flash controller functions (it has that too); it is a processor specifically for off-loading functions from the primary applications. Developers of these modern and next generation applications should find adapting their applications to take advantage of the new drives relatively straightforward. The NVMe Catalina 2 is now available in PCIe AIC and U.2 form factors.
In-Situ Processing
While not a controller company, NGD Systems incorporates "in-storage" hardware acceleration that puts computational capability on the drive itself. Doing so eliminates the need to move data to main memory for processing, reducing both bandwidth and server RAM requirements. It also reduces the pace at which the compute tier needs to scale, which should lead to reduced power consumption.
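As a rough illustration of that bandwidth and memory saving, consider the back-of-the-envelope calculation below. The drive count, capacity per query and filter selectivity are assumed numbers chosen only to show the shape of the saving, not vendor figures.

```python
# Hypothetical illustration of how drive-side filtering shrinks the data
# that must cross into host memory for processing.

drives = 24                 # drives per server (assumed)
data_per_drive_tb = 8       # data scanned per drive per query (assumed)
selectivity = 0.001         # fraction of data the drive-side filter keeps (assumed)

moved_without_insitu_tb = drives * data_per_drive_tb
moved_with_insitu_tb = moved_without_insitu_tb * selectivity

print(f"Without in-situ processing: {moved_without_insitu_tb} TB to host memory")
print(f"With in-situ processing:    {moved_with_insitu_tb} TB to host memory")
```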
Elastic FTL
Beyond onboard compute, the drives themselves also have top-notch controller technology. The controllers (separate from the compute) on the NGD Systems SSD use a proprietary Elastic FTL and Advanced LDPC Engines to provide industry-leading density, scalability and storage intelligence. This enables support for the ever-changing availability of drive types, including 3D TLC NAND and QLC NAND, as well as future NAND specifications. The company also claims the lowest watts-per-TB in the industry.
StorageSwiss Take
Moving compute to the storage is the ultimate in "divide and conquer," which may be the best strategy for applications that need to operate on large data sets. If every drive in the environment can reduce the amount of data that needs to be transferred into main memory for processing, the environment becomes far more scalable.
Unlike many flash memory announcements, the NGD Systems solution should have immediate appeal to hyperscale data centers looking to improve efficiency while also improving response times.
NGD Systems will show a demonstration of the technology at Flash Memory Summit 2017, August 8-10 in Santa Clara, CA. Vladimir Alves, CTO and co-founder of NGD Systems, will also make a presentation on August 10th, at Flash Memory Summit Session 301-C, entitled, “Get Compute Closer To Data.”
Storage Switzerland is at Flash Memory Summit 2017.