Simply put, the distributed cloud is the intersection of edge computing and cloud consumption. It is no secret that IT-as-a-service delivered through the cloud continues to proliferate, driven by business requirements for greater IT agility and elasticity and a lower cost model. At the same time, however, the public cloud by and large cannot deliver the extremely low latency that some applications (such as a virtual reality workload) require. Furthermore, these applications are typically highly data intensive, meaning that massive network bandwidth would be required to stream this data from the public cloud to the user. The public cloud may also run afoul of growing data privacy and sovereignty requirements. And finally, it offers no autonomy; if the public cloud service goes down, the application goes down with it. A growing number of edge data centers are springing up to address these needs. The distributed cloud is the ability to deliver cloud-like flexibility at the edge, whereby workloads may be spun up and down and the associated resources paid for as they are needed and consumed.
The problem with the distributed cloud is that it adds massive cost and complexity around network management, as well as management of data and the control plane. Providing network connectivity to a single data center is challenging enough; the distributed cloud adds a vast number of data centers, users and devices that must be connected across the globe. The need to provide multitenancy and to meet a variety of application-specific service level agreements (SLAs) also comes into play – and doing so is far from an easy feat in a distributed cloud architecture.
Pluribus Networks' Chief Marketing Officer, Mike Capuano, recently joined George Crump, Storage Switzerland's Founder and Lead Analyst, for a Lightboard Video discussion regarding his company's unique take on connecting the distributed cloud.