Silos of Clusters – Is this the End Result of Data Center Modernization?

Applications like Elastic, Hadoop, Kafka and TensorFlow typically run on scale-out architectures built from dozens, if not hundreds, of servers, which act as nodes in the application’s cluster. Many organizations now use a mix of these applications to derive the outcomes they need to be productive. Data centers today consist of a growing number of these clusters but have limited ability to share resources between them. With each application siloed, IT must manage each one separately.

There is also increasing use of graphics processing units (GPUs) in these environments to deliver faster results. These GPUs are much more expensive than typical commodity hardware, so the inability to share GPUs across the various clusters is particularly frustrating.

This lack of efficiency is an ideal opening for shared storage vendors to enter a market from which network latency previously excluded them. The cost advantages of an efficient storage infrastructure, plus the advent of Non-Volatile Memory Express over Fabric (NVMe-oF) storage networking, make shared storage more compelling in these environments. However, storage is only part of the problem, since the compute that runs the application is also dedicated to that application. There is also the reality that it takes time to convert an environment from traditional IP protocols to NVMe-oF.

Using Composability to Knock Down Silos

The answer to the silos of clusters problem is Composable Infrastructure, which enables organizations to allocate and decommission servers for a given application within minutes. A typical composable infrastructure has a resource pool of servers, GPUs and storage. There is also a minimally viable cluster of each application type. When the organization needs to scale up one of these applications to process an intensive job, it can allocate as many servers and GPUs as required from the pool, as well as additional storage. Once the job is complete, IT returns the resources to the pool for another application to use. IT can initiate composability either manually or programmatically, based on workflows.
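To make the allocate-and-return cycle concrete, here is a minimal sketch of the workflow in Python. The `ResourcePool` class and its `compose`/`decompose` methods are hypothetical illustrations, not any vendor’s actual API; real composable systems expose similar operations through management software or a REST interface.

```python
# Hypothetical sketch of the compose/decompose workflow described above.
class ResourcePool:
    def __init__(self, servers, gpus):
        # Free resources available to any application cluster.
        self.free_servers = servers
        self.free_gpus = gpus

    def compose(self, servers, gpus):
        """Allocate servers and GPUs from the pool for an intensive job."""
        if servers > self.free_servers or gpus > self.free_gpus:
            raise RuntimeError("insufficient free resources in the pool")
        self.free_servers -= servers
        self.free_gpus -= gpus
        return {"servers": servers, "gpus": gpus}

    def decompose(self, allocation):
        """Return a completed job's resources to the pool."""
        self.free_servers += allocation["servers"]
        self.free_gpus += allocation["gpus"]

pool = ResourcePool(servers=16, gpus=8)
job = pool.compose(servers=4, gpus=2)  # scale up, e.g., a TensorFlow cluster
# ... run the intensive job ...
pool.decompose(job)                    # resources are free for other clusters
```

In practice, the programmatic path would trigger `compose` and `decompose` automatically from a job scheduler or workflow engine rather than by hand.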

Composable Infrastructure can enable an organization to take better advantage of NVMe-oF, but it does not require the organization to move to NVMe-oF immediately. Composable Infrastructure is a logical first step on a path that may lead to NVMe-oF.

To learn more about using Composable Infrastructure to optimize hyperscale environments like Elastic, Hadoop, Kafka and TensorFlow, register for our on demand webinar “20 Minute Introduction to Composing Infrastructure for Elastic, Hadoop, Kafka”. After you register you can access our latest eBook “Is NVMe-oF Enough to Fix The Hyperscale Problem?”.


George Crump is the Chief Marketing Officer of StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March of 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.

