Modern data storage has become plagued with inefficiency. Legacy storage software was not written to keep up with the levels of throughput and latency made possible by solid-state drive (SSD) media and the non-volatile memory express (NVMe) access protocol. Additionally, the storage stack has not seen the optimizations needed to take full advantage of the higher capacity and changing endurance characteristics of modern SSDs. This problem is pushing data center managers to substantially overprovision resources, including central processing units (CPUs) and storage drives, to deliver the performance and capacity demanded by modern workloads such as big data processing, artificial intelligence, machine learning and cloud databases.
Pliops, an Israel-based company that just received Series B funding, is developing a hardware-based storage processor that offloads tasks typically handled by storage software, with the objective of extracting greater performance from CPUs. In particular, the processor is being designed to facilitate adoption of lower-cost flash, as well as more scalable delivery of data-driven, performance-hungry workloads such as machine learning and cloud database processing.
With the processor slated for release in late 2H19, specific technical details are not yet available. Fundamentally, Pliops is designing the processor to collapse multiple inefficient and redundant storage software layers into a simplified stack, accelerating compute-intensive functions and eliminating bottlenecks. The processor will sit alongside the storage media, offloading data management traffic as well as storage services such as erasure coding. From this standpoint, it stands to add value to new storage media and access protocols (including SSDs and NVMe) as well as cloud-friendly software-defined architectures (including hyperconverged infrastructure) by enabling more data to be served directly to the application.
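To make the offload concept concrete, the sketch below is a toy illustration of erasure coding, one of the storage services mentioned above. It uses simple XOR parity (RAID-5 style) rather than the Reed-Solomon codes typical of production systems, and it says nothing about Pliops' actual implementation; the point is only that rebuilding lost data is exactly the kind of compute-intensive loop a dedicated storage processor could take off the CPU.

```python
# Toy erasure-coding illustration: XOR parity across a stripe of
# equal-length data blocks. Losing any single block is recoverable
# by XORing the survivors with the parity block.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Stripe three data blocks and compute a parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing one block, then rebuild it from the survivors
# plus parity -- the per-byte work a storage processor would
# perform in hardware instead of burning CPU cycles.
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost_index]
```

XOR parity tolerates only one failure per stripe; real deployments trade more parity blocks for tolerance of multiple simultaneous failures, which makes the encoding math correspondingly heavier.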
According to Pliops, the processor enables data centers to access data up to fifty times faster with one-tenth the power consumption and computational load, thanks to substantial improvements in write throughput, latency and network bandwidth. For instance, Pliops claims it can increase the throughput of cloud databases such as MySQL and Cassandra by more than ten times, while cutting compute load by 90% and network traffic by a factor of 20.
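As a hedged back-of-envelope check on those vendor figures (these are Pliops' own claims, not independent benchmarks, and the 100-server baseline below is purely hypothetical), the two claims compound: fewer servers are needed for the same aggregate throughput, and each remaining server carries a lighter compute load.

```python
# Back-of-envelope arithmetic on the vendor's claims.
# All multipliers come from Pliops' marketing figures; the
# baseline cluster size is a made-up example.

baseline_servers = 100        # hypothetical cluster today
throughput_multiple = 10      # claimed per-server throughput gain
compute_reduction = 0.90      # claimed cut in compute load

# Servers required for the same aggregate throughput:
servers_needed = baseline_servers / throughput_multiple

# Relative CPU load versus the baseline cluster, combining fewer
# servers with the per-server compute reduction:
relative_cpu = (servers_needed / baseline_servers) * (1 - compute_reduction)

print(servers_needed)            # 10.0
print(round(relative_cpu, 4))    # 0.01 -- roughly 1% of baseline
```

If the claims hold even approximately, the savings multiply rather than add, which is why the pitch targets overprovisioned data centers specifically.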
Pliops currently has eight core patents pending, and its founders bring NAND and SSD experience gained at companies including Samsung and XtremIO. The company most recently secured $30 million in funding from venture capital and strategic investors to accelerate development of the processor – an effort that includes the opening of a U.S. headquarters in San Jose, California.
Pliops counts heavyweights including Intel, Xilinx and Western Digital as investors, and it will collaborate with these companies on go-to-market efforts – which should help drive early adoption. Hyperscale cloud service providers and telcos will be Pliops’ initial target accounts, as it works to build case studies to leverage as inroads to the broader enterprise.
Inefficiencies at the storage software layer have become the leading obstacle to serving today’s cloud-hosted, data-centric workloads without breaking the bank. Pliops offers an interesting and unique, hardware-grounded approach to solving this problem. Its processor stands to improve resource utilization and scalability for enterprise workloads such as big data processing at the edge and cloud database hosting. Storage managers should watch for proof-of-concept validation around their target workloads coming out of large-scale service provider environments later this year, which will help inform purchase decisions.