Applications like Elastic, Hadoop, Kafka and TensorFlow typically run on scale-out architectures built from dozens, if not hundreds, of servers, which act as nodes in the application's cluster. Many organizations now use a mix of these applications to derive the outcomes they need to be productive. Data centers today contain a growing number of these clusters but have limited ability to share resources between them. With each application siloed, IT must manage each one separately.
There is also increasing use of graphics processing units (GPUs) in these environments to deliver faster results. These GPUs are much more expensive than typical commodity hardware, so the inability to share GPUs across the various clusters is particularly frustrating.
This inefficiency is an ideal opening for shared storage vendors to enter a market from which network latency previously excluded them. The cost advantages of an efficient storage infrastructure, plus the advent of Non-Volatile Memory Express over Fabric (NVMe-oF) storage networking, make shared storage more compelling in these environments. However, storage is only part of the problem, since the compute that runs each application is also dedicated to that application. There is also the reality that it takes time to convert an environment from traditional IP protocols to NVMe-oF.
Using Composability to Knock Down Silos
The answer to the problem of siloed clusters is Composable Infrastructure, which enables organizations to allocate and decommission servers for a given application within minutes. A typical composable infrastructure has a resource pool of servers, GPUs and storage, along with a minimally viable cluster of each application type. When the organization needs to scale up one of these applications to process an intensive job, it can allocate as many servers and GPUs as required from the pool, as well as additional storage. Once the job is complete, it returns the resources to the pool for another application to use. IT can initiate composability either manually or programmatically, based on workflows.
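The allocate-and-return cycle described above can be sketched in a few lines of Python. This is only an illustration of the logic, not a real composability API; the `ResourcePool` class and its `compose`/`release` methods are hypothetical names chosen for this example.

```python
# Hypothetical sketch of the compose/release cycle in a composable
# infrastructure. Not a real vendor API; names are illustrative only.

class ResourcePool:
    def __init__(self, servers, gpus):
        # Resources not currently assigned to any application cluster.
        self.free = {"servers": servers, "gpus": gpus}
        # Application name -> resources currently composed into its cluster.
        self.allocated = {}

    def compose(self, app, servers=0, gpus=0):
        """Allocate servers and GPUs from the pool to an application's cluster."""
        if servers > self.free["servers"] or gpus > self.free["gpus"]:
            raise RuntimeError("insufficient free resources in the pool")
        self.free["servers"] -= servers
        self.free["gpus"] -= gpus
        held = self.allocated.setdefault(app, {"servers": 0, "gpus": 0})
        held["servers"] += servers
        held["gpus"] += gpus

    def release(self, app):
        """Return an application's resources to the pool once its job is done."""
        held = self.allocated.pop(app, {"servers": 0, "gpus": 0})
        self.free["servers"] += held["servers"]
        self.free["gpus"] += held["gpus"]


pool = ResourcePool(servers=40, gpus=8)
pool.compose("tensorflow-training", servers=10, gpus=4)  # scale up for a job
pool.release("tensorflow-training")                      # job done: return to pool
```

In a real deployment the same two calls would be issued by an orchestration workflow against the composability fabric's management API, which is what allows the scale-up to happen in minutes rather than through a hardware procurement cycle.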
Composable Infrastructure can enable an organization to take better advantage of NVMe-oF, but it does not require the organization to move to NVMe-oF immediately. Composable Infrastructure is a logical first step on a path that may lead to NVMe-oF.
To learn more about using Composable Infrastructure to optimize hyperscale environments like Elastic, Hadoop, Kafka and TensorFlow, register for our live webinar "20 Minute Introduction to Composing Infrastructure for Elastic, Hadoop, Kafka". When you pre-register, you will receive a copy of our latest eBook "Is NVMe-oF Enough to Fix The Hyperscale Problem?" right away, no waiting for the live event.