Before selecting a platform, IT planners need to understand the three requirements for Edge Computing. While some of these requirements apply to any data center, Edge Computing turns capabilities that are merely nice-to-have elsewhere into must-have features. Hyperconverged infrastructure (HCI) solutions are often part of the Edge Computing discussion, but many of them don't meet these critical requirements.
In this article, we will compare Hyperconverged Infrastructure (HCI) to Ultraconverged Infrastructure (UCI) to help you decide on the best solution for your Edge Computing Strategy. Below is a table summarizing what you will learn:
Comparing HCI to UCI for Edge Computing
| Requirement | HCI | UCI |
|---|---|---|
| Downward Scale | Three-Node Minimum | One or Two Nodes |
| Upward Scale | Finite Growth, Tight Node Coupling | Near-Infinite Growth, Loose Coupling |
| High Availability | Complex Failover | Seamless Failover |
| Drive and Data Protection | Complex, Impacts Performance | Simple, No Performance Impact |
| Remote Monitoring | Listen-Only, Vulnerable | Active Monitoring, Resilient |
| Remote Operations | Per Edge Location | Across Edge Locations |
1. Edge Computing Requires Downward Scale
The first of the three requirements for Edge Computing is downward scale. While most scale discussions focus on scaling large, Edge Computing solutions need to pack a lot of computing power, storage performance, and storage capacity into a tiny space. Frequently, the Edge "data center" is a shelf in a break room or a shed in the middle of a wind farm.
As is often the case, the hardware to meet this requirement is readily available. Mini-servers like Intel's Next Unit of Computing (NUC) enable IT to deliver a data center in a shoebox, and most vendors offer ruggedized versions of these systems. The mini-servers also tend to be very power efficient. The problem is the infrastructure software.
Most HCI solutions require three nodes to get started, making it harder to fit into tight spaces. They also typically require VMware as the hypervisor, which burdens the mini-servers with a heavy virtualization tax. The result rules out mini-servers altogether, and the Edge Strategy must change to include full-scale servers, which can increase cost by as much as 100%, along with power and space consumption.
The selected Edge infrastructure should not become an albatross that must be managed independently. Ideally, the same infrastructure software that drives the core data center should also drive the Edge. Establishing a single infrastructure for both Edge and Core means the software must scale downward as well as upward to hundreds of nodes. It also needs to support a mixture of node types so it can adapt to changing workloads and technology.
2. Edge Computing Requires High Availability
The second of the three requirements for Edge Computing is high availability. It may surprise some IT planners, but if the Edge Computing device goes down, the entire remote site often stops. In some cases, that means the organization loses the chance to capture unique data, like the weather conditions at a specific time. In other situations, it means a direct loss of revenue and potentially unhappy customers.
Delivering high availability (HA) at the Edge is challenging, though. First, HA requires redundancy, which means at least two servers, not one. If the Edge Computing software isn't efficient enough to run on mini-servers, that means buying more full-scale servers and consuming more space and power. Second, HA requires networking. The nodes need to communicate to verify that each is still working, and if the solution can't manage that network for you, you will have to monitor it constantly.
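To make the node-to-node communication requirement concrete, below is a minimal sketch of the kind of heartbeat an HA pair depends on. It is purely illustrative, not any vendor's implementation; the peer address, port, and miss threshold are hypothetical. Each node sends a periodic UDP beat to its peer and declares the peer failed after several consecutive misses:

```python
# Minimal two-node heartbeat sketch (illustrative only). Run one copy on
# each node, with PEER_ADDR pointing at the other node.
import socket
import time

PEER_ADDR = ("192.168.1.2", 9999)  # hypothetical address of the other node
LISTEN_PORT = 9999                 # both nodes listen on the same port
INTERVAL = 1.0                     # seconds between heartbeats
MISSED_LIMIT = 3                   # consecutive misses before declaring failure

def run_heartbeat() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LISTEN_PORT))
    sock.settimeout(INTERVAL)
    missed = 0
    while True:
        sock.sendto(b"beat", PEER_ADDR)   # announce that this node is alive
        try:
            sock.recvfrom(16)             # wait for the peer's beat
            missed = 0
        except socket.timeout:
            missed += 1
            if missed == MISSED_LIMIT:
                # In a real HA stack this would trigger failover, e.g.,
                # restarting the peer's VMs on the surviving node.
                print("peer missed", MISSED_LIMIT, "beats -- begin failover")
        time.sleep(INTERVAL)

if __name__ == "__main__":
    run_heartbeat()
```

The point of the sketch is that even this trivial exchange assumes a working, managed network path between the nodes, which is exactly what an unattended Edge site cannot take for granted.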
High availability also means data protection: making sure that a failed drive, which can be more likely in Edge Computing environments, won't cause data loss. While almost every HCI solution provides some level of drive failure protection, IT needs to pay attention to the overhead associated with it. If, for example, the HCI solution uses erasure coding, you will almost certainly have to deploy at least three nodes, and you more than likely won't be running mini-servers. Erasure coding is a heavy algorithm that steals processing power and requires complex decisions about how to protect data.
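To see why erasure coding pushes node counts and hardware requirements up, here is a back-of-the-envelope comparison of a common 4+2 erasure-coding layout against simple two-way mirroring. The numbers are generic illustrations, not measurements of any specific HCI product:

```python
# Rough storage-protection comparison (illustrative arithmetic only).

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw capacity consumed per usable byte under k+m erasure coding."""
    return (data_shards + parity_shards) / data_shards

def mirror_overhead(copies: int) -> float:
    """Raw capacity consumed per usable byte under n-way mirroring."""
    return float(copies)

# 4+2 erasure coding: 1.5x capacity overhead, but every write must be
# split, encoded, and spread across six failure domains (nodes or drives),
# and each write burns CPU on the encode step.
print(f"4+2 erasure coding: {erasure_overhead(4, 2):.2f}x raw capacity, "
      "6 failure domains minimum")

# 2-way mirroring: 2x capacity overhead, but only two nodes are needed
# and there is no encode/decode work on the data path.
print(f"2-way mirroring:    {mirror_overhead(2):.2f}x raw capacity, "
      "2 failure domains minimum")
```

The capacity savings of erasure coding come at the cost of node count and CPU cycles, both of which are scarce in an Edge closet full of mini-servers.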
Another aspect of HA is protecting the data itself. Most HCI solutions have relatively weak data protection capabilities. They do offer snapshots, but because of the software's overhead and the lack of hypervisor integration, they can't maintain a rich history of those snapshots. As a result, they need a separate Edge-based backup that, once again, increases cost and complexity.
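As an illustration of what a "rich history" means in practice, here is a sketch of a generic grandfather-father-son-style retention policy: many recent snapshots plus progressively sparser older ones. The policy thresholds are hypothetical, and this is not any vendor's implementation:

```python
# Illustrative snapshot-retention pruner (generic GFS-style policy).
from datetime import datetime, timedelta

def snapshots_to_keep(snaps: list[datetime], now: datetime) -> set[datetime]:
    """Keep every snapshot <24h old, one per day for 30 days, one per week after."""
    keep: set[datetime] = set()
    daily_seen: set[str] = set()
    weekly_seen: set[str] = set()
    for s in sorted(snaps, reverse=True):  # newest first
        age = now - s
        if age < timedelta(hours=24):
            keep.add(s)                    # keep all recent snapshots
        elif age < timedelta(days=30):
            day = s.strftime("%Y-%m-%d")
            if day not in daily_seen:      # newest snapshot of each day
                daily_seen.add(day)
                keep.add(s)
        else:
            week = s.strftime("%Y-%W")
            if week not in weekly_seen:    # newest snapshot of each week
                weekly_seen.add(week)
                keep.add(s)
    return keep

if __name__ == "__main__":
    now = datetime(2025, 1, 31)
    snaps = [now - timedelta(hours=h) for h in range(0, 24 * 90, 6)]  # 90 days, every 6h
    print(f"{len(snaps)} snapshots taken, {len(snapshots_to_keep(snaps, now))} retained")
```

Maintaining a schedule like this requires the snapshot layer to be cheap enough that hundreds of retained points don't degrade performance, which is where overhead-heavy snapshot implementations fall down.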
Any failure or data exposure at the Edge is significantly more expensive to repair. People and equipment need to ship to the location, and someone from IT needs to bring the site back online. To mitigate the impact of unplanned outages, it is critical to use high-quality redundant hardware and seamless failover.
3. Edge Computing Requires Remote Operations
The third of the three requirements for Edge Computing probably comes as no surprise: remote operations. It is also no surprise to vendors, who have moved quickly to fill the gap in their capabilities. The problem is the path they chose to fill that gap: creating a separate add-on console that listens in on the remote sites. The issue with this approach is that the Edge location doesn't know it is being monitored and can't compensate if the monitoring add-on stops working. Moreover, the listen-only approach, by definition, limits what IT can do operationally at the Edge, and IT generally must log in to each location to perform fixes and updates.
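The difference between listen-only and active monitoring is easiest to see from the Edge node's side. Below is a hedged sketch, with hypothetical endpoint URLs and payload fields, of a push model: because the node sends telemetry and expects an acknowledgment, it notices when monitoring breaks, something a site that is merely being polled can never do:

```python
# Push-style telemetry from an Edge node (illustrative only; the endpoint
# URLs and payload fields are hypothetical).
import json
import time
import urllib.request

MONITOR_URLS = [
    "https://core-dc-1.example.com/telemetry",
    "https://core-dc-2.example.com/telemetry",
]

def push_telemetry(payload: dict) -> int:
    """Send telemetry to every monitor; return how many acknowledged."""
    acked = 0
    body = json.dumps(payload).encode()
    for url in MONITOR_URLS:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status == 200:
                    acked += 1
        except OSError:
            pass  # this monitor is unreachable; try the others
    return acked

if __name__ == "__main__":
    while True:
        acks = push_telemetry({"site": "edge-17", "ts": time.time(), "healthy": True})
        if acks == 0:
            # Every monitor is silent: the Edge site knows it is flying blind
            # and can raise a local alarm or buffer telemetry to disk.
            print("no monitor acknowledged -- buffering telemetry locally")
        time.sleep(60)
```

In a listen-only design the logic runs in the opposite direction: the console polls, and if the console dies, the Edge site has no way to know that anything is wrong.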
The add-on approach to Edge monitoring means that IT's operational burden grows heavier with each additional Edge location. Eventually, the organization needs to hire people just to manage the Edge, which rapidly drives up costs.
HCI Can’t Meet the Three Requirements of Edge Computing
The theoretical "go-to" for Edge Computing, HCI falls well short of meeting the three requirements for Edge Computing. The various HCI solutions have problems with downward scale: they either can't scale smaller than three nodes, or if they do support single- or dual-node configurations, they do so with many compromises. They don't offer simple and affordable high availability, drive-failure protection, or data protection. And finally, while a few HCI vendors provide remote monitoring, it is an add-on that is vulnerable to failure and limits operational capabilities.
UCI—The Solution for Edge Computing
The solution for Edge Computing is Ultraconverged Infrastructure (UCI), a data center operating system that integrates virtualization, storage services, and robust networking functionality into a single piece of software. VergeIO's UCI solution, VergeOS, combines these capabilities into a common code base, enabling a more efficient operating environment that delivers more performance and capacity on less hardware, making it ideal for Edge Computing deployments. Because VergeOS can deliver excellent performance on mini-servers like Intel's NUC, IT can create a complete Edge data center in a space smaller than a shoebox.
UCI isn't an Edge-only solution. Its efficiency enables the same software to scale to dozens of nodes and utilize nodes of different types, creating a universal infrastructure for your core data center and the cloud. The result is a dramatic reduction in cost and complexity. Read "How to Repatriate Cloud Workloads" to learn more.
UCI also delivers on the high-availability requirement. IT planners can connect two of the mini-servers via a crossover cable, and the UCI software manages the connectivity and redundancy from there. Data protection and data deduplication are built into VergeOS, which provides complete protection from drive failure and powerful snapshot retention, eliminating the need for a separate Edge backup expenditure.
The latest release of VergeOS, Atria, addresses the third Edge Computing requirement: remote operations. Atria includes a new capability, Site Manager, that enables IT to remotely monitor Edge sites and perform operational functions. Unlike the add-on approaches, Site Manager is built into the core operating system. Each Edge location sends its telemetry data to one or more other locations, including multiple core data centers. Furthermore, operational functions can be applied universally, like triggering a rolling upgrade or Edge-wide snapshot creation.
Conclusion
Edge Computing can make organizations more flexible and responsive, but the initiative can also bury IT in a sea of complexity and cost overruns. Selecting the right infrastructure is critical to enabling Edge Computing to live up to its potential. Ultraconverged Infrastructure has the advantage that it meets the three requirements for Edge Computing while also being a foundation for the core data center.
To learn more, join VergeIO for our one-slide webinar, "Infrastructure: Edge and Private Cloud." During the panel discussion, we will cover how to create an infrastructure to power Edge Computing and Private Cloud initiatives in your organization.
