The problem with the public cloud has always been physics. We like the idea of being able to access our data anytime, anywhere, but the reality often falls short. It takes a lot of horsepower to get data from point A to point B – bandwidth, power, and time. You can always buy more bandwidth, and you can always buy more power. You can never buy more time.
The Cloud vs. Physics Problem
So if a customer would like to seamlessly migrate workloads between the data center and the public cloud, or from one public cloud to another, they find themselves facing the laws of physics. Of course they could replicate their entire data center up there, but that takes a long time. And once they are using the public cloud for anything that creates data, they may want to get that data back to their data center (or another provider) at some point. Again, they find themselves facing the laws of physics.
We do live in a time where it’s becoming much easier to migrate a compute workload from your data center to the public cloud, and from one public cloud to another public cloud. But if that workload creates a significant amount of data, moving that data around is a little bit harder.
What customers would like to be able to do is use each public cloud for the things it’s good at, and seamlessly move data back and forth between those clouds in order to do that.
Cloud Use Cases for “Normal” Data Centers
The first workload that a lot of people migrated to the cloud was data protection. It makes sense to store a copy of your data off-site, far away from the disasters that might impact your company, and the cost of doing so has come down to numbers many companies can afford. Another great use of the cloud is collaboration between multiple teams. If you’ve got an application that multiple teams need to access, having that application’s data in the public cloud makes a lot of sense, because it makes it much easier for others to access it.
There are, of course, applications that occasionally need significant amounts of compute power to accomplish their goals. Those applications can really benefit from cloud bursting, where having their data stored in the cloud allows an administrator to turn on hundreds or thousands of VMs to process it. Finally, archiving reference data to the cloud also makes a lot of sense: move infrequently accessed data to the cloud and pay $0.05 per GB per month (or less) to keep it alive. These are all use cases for the cloud, and they all require significant data movement to make happen.
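To put that archive pricing in perspective, here is a minimal back-of-the-envelope sketch. The $0.05/GB-month figure is the one mentioned above; real cloud archive tiers vary by provider and region, so treat the rate as a placeholder parameter.

```python
def monthly_archive_cost(total_bytes, price_per_gb_month=0.05):
    """Estimate the monthly cost of parking data in a cloud archive tier.

    price_per_gb_month is a placeholder rate (the $0.05/GB-month cited
    above); actual tier pricing differs per provider.
    """
    gigabytes = total_bytes / (1024 ** 3)
    return gigabytes * price_per_gb_month

# Example: 10 TiB of infrequently accessed reference data.
cost = monthly_archive_cost(10 * 1024 ** 4)
print(f"${cost:.2f}/month")  # 10 TiB = 10240 GiB -> $512.00/month
```

Even at tens of terabytes, the monthly carrying cost is modest compared to keeping the same data on primary storage – which is exactly why the archive use case is compelling.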
Introducing SwiftStack 5.0
What all of these use cases have in common is the need to get the data there reliably, and sometimes to bring it back. This requires reliable, bidirectional replication – which is exactly what SwiftStack Cloud Sync now offers. The product can replicate data in either direction between a SwiftStack storage system and Google Cloud or Amazon S3.
Bidirectional sync enables all of the use cases mentioned above. Data can easily synchronize to the public cloud so other users and applications can access it. If there is any data created in the cloud, it can be replicated back down to the data center so that each location is up-to-date. Data can also synchronize to the cloud for data protection purposes, and older, unused data can easily be synchronized to the appropriate cloud tier.
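To make the idea concrete, here is a hedged sketch of the reconciliation step at the heart of any bidirectional sync. This is not SwiftStack's implementation – it is an illustration using in-memory dicts of `{object_name: last_modified_timestamp}` for each side, deciding which objects each side is missing or holds an older copy of.

```python
def plan_sync(local, remote):
    """Return (to_upload, to_download) lists of object names.

    An object is uploaded if the remote side lacks it or holds an older
    copy, and downloaded in the mirror case. Timestamps stand in for
    whatever change-detection a real sync engine uses (etags, versions).
    """
    to_upload = [
        name for name, mtime in local.items()
        if name not in remote or remote[name] < mtime
    ]
    to_download = [
        name for name, mtime in remote.items()
        if name not in local or local[name] < mtime
    ]
    return sorted(to_upload), sorted(to_download)

# "report.csv" was updated on-prem; "results.json" was created in the cloud.
local = {"report.csv": 200, "archive.tar": 100}
remote = {"report.csv": 150, "results.json": 300}
up, down = plan_sync(local, remote)
print(up, down)  # ['archive.tar', 'report.csv'] ['results.json']
```

The two-sided comparison is what makes the sync bidirectional: data created in the cloud flows back down just as naturally as on-prem data flows up.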
Customers wanting to seamlessly move data between the data center and the public cloud need a solid bidirectional synchronization tool with policy management, and SwiftStack appears to have the features to deliver it. Data protection, archiving, and bidirectional data synchronization between the data center and multiple public clouds offer a lot of flexibility to many users.