Why Hyperconverged Infrastructure for Edge Environments?

Hyperconverged infrastructure (HCI) continues to gain market traction quickly, primarily on the promise of simplified management and lower infrastructure costs. With maturity come greater demands, in the case of HCI most notably around performance and capacity. IT organizations have begun looking to hyperconverged infrastructure to run the modern workloads that require massive data ingestion and minimal latency, workloads that increasingly power today's economy.

Edge computing is a prime example of such a forward-looking workload that, in theory, is well suited to deployment on hyperconverged infrastructure. Compute and storage requirements at the edge are increasingly demanding: these environments must keep pace with rapidly accelerating data generation and with growing pressure from the business to analyze that data quickly for competitive advantage. The simplicity and cost efficiency that are hallmarks (or at least the typical perception) of hyperconverged infrastructure stand to deliver tremendous value in helping IT keep up. Furthermore, by providing a streamlined path to dedicated private cloud infrastructure, hyperconverged infrastructure offers a middle ground between shipping data from the edge back to a centralized data center and sending it to the public cloud, a balance of latency and control that is ideal for the edge.

At the same time, multi-tenancy, flexible scalability, and high-volume, low-latency processing as close as possible to where the data is generated are table stakes for edge environments, which must accommodate data sprawl and growing analytics implementations. These capabilities require premium components, including high-performance CPUs and NVMe media, running counter to the "first generation" hyperconverged infrastructures that were designed to take advantage of the capex economies of commodity hardware.

IT organizations looking to bring simplicity and cost efficiency to edge computing environments without sacrificing capacity or performance should consider a more advanced hyperconverged infrastructure solution ("HCI 2.0") that utilizes enhanced hardware. Storage Switzerland's recent blog, What is HCI 2.0, further discusses how we are classifying this next generation of hyperconverged infrastructure.

When it comes to serving edge environments specifically, IT organizations should focus on maximizing throughput and drives per server node in their infrastructure. Such a hardware architecture can help provide the bandwidth and consistent, low latency required by data-intensive workloads while decreasing costs by maximizing CPU utilization and minimizing socket-based licensing costs.

For additional conversation about the future of hyperconverged infrastructure and why the hardware itself matters, access Storage Switzerland’s webinar in conjunction with Axellio, How to Put an End to Hyperconverged Silos.


Senior Analyst, Krista Macomber produces analyst commentary and contributes to a range of client deliverables including white papers, webinars and videos for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including: technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.

