Composable infrastructure, in which resources are disaggregated and can be recomposed on the fly, addresses several key requirements of modern IT infrastructure. Resources can be assigned to an application and later returned to the pool for use by a different application. Those resources are also better utilized, because they are not left sitting idle, tied to a single application or system. Serving the most demanding workloads requires a composable architecture that operates at a granular level, essentially treating the IT infrastructure as trays of compute, storage and networking resources, down to individual graphics processing units (GPUs) and solid-state drives (SSDs), for optimal efficiency.
Liqid Unified, Multi-Fabric Composable Infrastructure
Liqid provides software (the Liqid Command Center) and a top-of-rack Peripheral Component Interconnect Express (PCIe) switch that enable bare metal resources, including GPUs, field-programmable gate arrays (FPGAs), central processing units (CPUs), Non-Volatile Memory Express (NVMe) drives, network interface cards (NICs), and Intel Optane memory, to be disaggregated and then dynamically provisioned over a PCIe fabric. Storage Switzerland provided a more detailed overview of this architecture in a prior briefing note.
Focusing on PCIe connectivity was an important starting point for Liqid, because PCIe is the accepted standard for connecting storage, compute and networking. At the same time, PCIe has physical limitations and does not work as well in a shared storage environment as it does within a single rack. To enable resources to be composed across racks, and even across data centers, Liqid is adding multi-fabric support, not only for PCIe Gen3 and Gen4 but also for Ethernet and InfiniBand networking, in the upcoming release of its Command Center 2.2 software, which is slated for 2H19. The software will also support upcoming open systems interconnect specifications from the Gen-Z Consortium, of which Liqid is a member and whose new specifications it is actively helping to shape.
Liqid – Dell EMC Alliance for AI and ML
Liqid’s key differentiators, compared to its composable infrastructure peers, are its ability to compose bare metal rather than virtualized resources, which provides greater granularity in how resources are composed, and its focus on delivering the lowest possible latency across PCIe. These capabilities make the Liqid Command Center a strong foundation for artificial intelligence (AI) and machine learning (ML). These workloads are uneven; the hardware resources they require depend on the task at hand. For example, the NIC is heavily utilized during the data ingest phase, while the training phase leans heavily on GPUs. In a traditional, statically provisioned infrastructure, this leaves expensive resources underutilized and idle, consuming valuable data center floorspace, power and cooling. Data and the workload may even need to migrate across multiple dedicated systems depending on the phase the workload is in. Additionally, AI and ML projects tend to start small and then scale rapidly, but in a traditional infrastructure model it takes time to onboard resources. With Liqid Command Center, resources can be adapted according to the phase the AI or ML workload is in, and multiple AI and ML workloads can run in parallel, with resources shared across them. GPU, SSD and other resources may be flexibly and independently added as they are needed, and it is unnecessary to migrate data.
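To make the phase-driven composition pattern concrete, here is a minimal, purely illustrative Python sketch. It is not the Liqid Command Center API; the pool sizes, phase names and per-phase demands are invented for illustration. It models a shared pool of disaggregated devices that are reserved for each workload phase and then returned so parallel workloads can reuse them.

```python
# Toy model of composable infrastructure (hypothetical, not the Liqid API):
# devices live in a shared pool and are composed per workload phase.

# Shared pool of disaggregated devices, keyed by device type.
pool = {"gpu": 8, "nic": 4, "nvme_ssd": 8}

# Illustrative per-phase demand: ingest is NIC-heavy, training is GPU-heavy.
phase_demand = {
    "ingest":    {"nic": 2, "nvme_ssd": 4},
    "training":  {"gpu": 6, "nvme_ssd": 2},
    "inference": {"gpu": 2},
}

def compose(phase):
    """Reserve devices for a phase; fail if the pool cannot satisfy it."""
    demand = phase_demand[phase]
    if any(pool[dev] < n for dev, n in demand.items()):
        raise RuntimeError(f"insufficient resources for {phase}")
    for dev, n in demand.items():
        pool[dev] -= n
    return demand

def release(allocation):
    """Return devices to the shared pool for other workloads to compose."""
    for dev, n in allocation.items():
        pool[dev] += n

# Walk one workload through its phases: resources are rebalanced between
# phases instead of migrating the data to differently equipped systems.
for phase in ("ingest", "training", "inference"):
    alloc = compose(phase)
    # ... run the phase against the composed resources ...
    release(alloc)
```

The point of the sketch is the lifecycle: nothing is permanently bound to one workload, so after `release` the same GPUs and SSDs are immediately available to a second workload running in parallel.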
Liqid has collaborated with Dell Technologies’ OEM and Internet of Things (IoT) Solutions division to integrate its composable infrastructure software with Dell Technologies’ PowerEdge servers in a Composable Artificial Intelligence solution. The offering bundles Liqid Command Center software and a PCIe Gen3 Liqid fabric switch with Dell EMC compute nodes, each carrying two Intel Xeon Scalable processors with up to 24x DDR4 DIMMs and connecting to the Liqid switching fabric through a PCIe x16 adapter. It also includes a PCIe-connected expansion chassis that holds up to 8x GPUs and 8x NVMe SSDs. Liqid and Dell EMC also offer GPU and NVMe SSD solution bundles that are well suited for AI and ML, as well as other performance-intensive applications and environments, such as IoT data processing at the edge.
Minimizing the data center footprint and maximizing simplicity and utilization are critical for applications like AI and ML and for environments such as the edge. Liqid offers a path to infrastructure resources that are continuously rebalanced according to fluctuating workload requirements, without ripping and replacing existing infrastructure. It also offers full freedom of hardware choice, including how discrete components are added into the existing environment. Liqid’s alliance with Dell EMC provides a new procurement vehicle for customers looking for a more turnkey starting point.
The addition of multi-fabric support makes Liqid more agnostic to the underlying fabric across which these resources are delivered. It also brings Liqid’s value proposition to shared storage environments, for example by breaking down resource silos, extending the life of existing infrastructure, and enabling the right equipment to be added for the job at hand. This will broaden Liqid’s applicability.