Many enterprises have started down the path, or are well along it, of exploring the opportunities presented by artificial intelligence, deep learning, and machine learning. The result of all of these initiatives is the cognitive era, where systems almost appear to think for themselves. A vital element of the cognitive era is its storage infrastructure.
What is the Cognitive Era?
Artificial intelligence, deep learning, and machine learning are all forms of analytics. Machine learning is the phase most organizations are currently in on their way to a cognitive future. Machine learning analyzes big data and makes predictions based on that analysis. At that point, in most cases, a human has to get involved to make a decision or prescribe a solution based on the predictions. Deep learning analyzes more active data, more rapidly, to prescribe solutions. Artificial intelligence takes the final step: analyzing data in real time, as it is created, and taking action on its own based on that analysis, thus ushering in the cognitive era.
The Storage Implications of the Cognitive Era
For an organization to make the journey from big data and machine learning to the cognitive era, its storage architecture needs to change. Most big data storage designs rely on direct attached storage (DAS), with data moving to the appropriate compute node when it needs analysis. The cognitive era depends on real-time access to data. It can't wait for data to move; it has to access the data directly. Not only does storage need to respond quickly to analysis demands, it also has to deliver that data quickly, almost instantly, to the requesting node.
In the cognitive era, networking, both internal to the storage system and the external connection to the computing clusters, is critical. It needs to deliver bandwidth and latencies that are similar, if not equal, to DAS. The challenge is that there is a variety of storage networking methods, and each is upgrading its capabilities to deliver DAS-like performance. Cognitive storage vendors, though, need to support the variety of network options rather than force customers to switch out their infrastructure.
The cognitive environment scales similarly to machine and deep learning infrastructures, by adding nodes to the computing cluster. Those nodes are increasingly equipped with GPUs (graphics processing units) to further improve processing performance. The data within the cognitive storage infrastructure not only needs to be accessible to all nodes at the same time; the storage also has to respond to multiple requests for data in parallel.
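The shared, parallel-access requirement above can be pictured with a toy sketch. This is purely illustrative, not Vexata code: the "shared store" and node functions below are invented stand-ins for a storage system answering many compute nodes' read requests concurrently rather than one at a time.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented stand-in for the shared storage system: every node can see every chunk.
shared_store = {f"chunk-{i}": bytes([i]) * 4 for i in range(8)}

def node_read(node_id, key):
    """Simulate one compute node requesting a chunk of shared data."""
    return node_id, shared_store[key]

# All eight "nodes" issue reads at the same time; the store serves the
# requests in parallel instead of serializing them behind one another.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda i: node_read(i, f"chunk-{i}"), range(8)))
```

The point of the sketch is only the access pattern: every node addresses the same namespace, and the requests are in flight simultaneously, which is what DAS-style per-node storage cannot offer.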
Similar to the machine learning era, the cognitive era requires access to massive amounts of data, so not only does the cognitive environment need fast access and rapid response, it also requires cost-effective capacity. The storage system also needs to deliver high density to keep floor space consumption to a minimum.
Vexata 3.5 – End-to-End NVMe
Storage vendors are on a similar journey to enterprises. To improve performance, many have integrated NVMe flash media into their systems. Internal NVMe improves internal performance, but it may expose software inefficiencies and place extra stress on the internal storage controller, which, in most cases, processes both I/O and control traffic. In its 2017 VX-100 product release, Vexata addressed the internal computing challenge by completely separating the control and data paths, sending I/O traffic through an FPGA (field programmable gate array) to provide consistent performance at scale.
Each node in a Vexata storage system uses an FPGA complex called the VX-OS Router. The VX-OS Router is an acceleration engine for cut-through, high-bandwidth, and reliable I/O distribution. It also handles services like RAID calculations and encryption, delivered over standard 32Gbps Fibre Channel (SCSI) host interfaces. By separating the control path, the Vexata architecture can use a standard x86 processor to manage all of the VX-OS control functions, keeping performance up but costs down. The resulting solution delivers the full benefit of NVMe performance using standard SAN architectures at price points at or below existing all-flash arrays (AFAs).
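The control/data separation idea can be sketched in miniature. This is a conceptual illustration only (the queue names and operations are invented, and Vexata's implementation is hardware, not Python): management traffic and I/O traffic travel on independent paths, so control work never sits in front of I/O work.

```python
import queue
import threading

# Two independent queues model the separated paths: control (management/
# metadata) and data (I/O). Each path has its own dedicated worker.
control_q = queue.Queue()
data_q = queue.Queue()

completed = {"control": [], "data": []}
lock = threading.Lock()

def worker(name, q):
    while True:
        item = q.get()
        if item is None:          # shutdown sentinel
            break
        with lock:
            completed[name].append(item)

threads = [threading.Thread(target=worker, args=(n, q))
           for n, q in (("control", control_q), ("data", data_q))]
for t in threads:
    t.start()

# A control operation and a burst of I/O land on separate paths;
# neither blocks the other.
control_q.put("create-volume")
for i in range(4):
    data_q.put(f"write-{i}")

for q in (control_q, data_q):
    q.put(None)
for t in threads:
    t.join()
```

The design point the sketch mirrors is that a slow or bursty control operation cannot add latency to the I/O stream, because the two never share a queue or a processor.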
In its 3.5 release, Vexata adds support for NVMe over Fabrics (NVMe-oF), delivering NVMe performance directly to the hosts that need it. Unlike many of its competitors, Vexata supports both NVMe over Fibre Channel and NVMe over Ethernet, allowing organizations to keep their existing investment in networking. With this new NVMe-oF capability, the VX-100 system provides 80GB/s of throughput with sustained latencies under 100µs.
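As a back-of-envelope check on those figures (my arithmetic, not Vexata's), Little's Law relates throughput and latency to the amount of data that must be in flight at any instant:

```python
# Little's Law: data in flight = throughput x latency.
throughput_bytes_per_s = 80_000_000_000   # 80 GB/s, from the VX-100 figures above
latency_us = 100                          # sustained latency under 100 microseconds

in_flight_bytes = throughput_bytes_per_s * latency_us // 1_000_000
print(in_flight_bytes // 1_000_000, "MB outstanding at any instant")
# prints: 8 MB outstanding at any instant
```

Roughly 8MB of outstanding I/O, which is why the deep queue depths that NVMe and NVMe-oF expose across many hosts matter for sustaining this class of throughput.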
In the end, cognitive computing is all about low-latency access to vast amounts of data. Vexata, unlike many of its competitors, addresses both key latency chokepoints. Inside the storage system, its FPGA delivers a 100X latency reduction over leading all-flash array vendors. Externally, its NVMe-oF support delivers a 2X to 4X latency reduction when connecting to the computing tier. For organizations embarking on a cognitive journey, Vexata makes an excellent foundation.