Sometimes you feel like a cloud. Sometimes you don’t. That’s the idea behind the boundary-less data center. Sometimes you want to run a workload in the public cloud and sometimes you want to run it on your hardware. A boundary-less data center allows you to make that decision at any time based on the needs of the organization and the application at that moment.
The 3 Cloud Use Cases
There are three common scenarios in which you might want to move a workload to or from the cloud: cloud migration, cloud recovery, and dev/test in the cloud. Each requires moving both the data and the VMs that make up the workload into or out of the cloud. Let’s take a look at each use case.
If a customer wants to migrate one or more workloads to the cloud, they need that process to be seamless. The data needs to be replicated over time, and VMs need to be created in the cloud with compute, storage, and network characteristics similar to those in the local data center. That way, once workloads are migrated to the cloud, they continue to perform the same way they did in the local data center.
A very similar process is recovering to the cloud. If a server or VM running a particular workload is continually backed up to the cloud, an up-to-date image of that server is always available. A customer only needs to activate that VM by initiating a disaster recovery (DR) failover. This is essentially a continuous migration without actually making the move – unless something bad happens. DR is also much less expensive and much easier to test when it’s simply a matter of pushing a few buttons and running VMs in the cloud.
The third use case is automated dev/test in the cloud. Using processes similar to those described for migration and cloud recovery, a production image can be pushed into the cloud for development and test purposes. As with cloud recovery, the image can be continually updated if need be. Reducing the effort required to spin up a lab can significantly ease the challenges of maintaining development and test environments.
Storage Switzerland was recently briefed by a company called CloudVelox, whose product automates the replication of entire workloads into the cloud, just as described above. The company claims that the product requires no specialized knowledge of the cloud, supports continuous replication of all application and user data, and offers a single solution that addresses each of the above use cases.
The CloudVelox approach starts with an OS-based agent that installs on source systems without requiring a reboot. This agent runs in user space. The agent scans the systems for compute, storage, network and security characteristics and creates a “blueprint” that is stored in the cloud. The blueprint consists of OS type, version, speed and utilization of CPUs, amount of memory, network and IP structure, storage size and types, open ports, etc. This blueprint is used to provision the required cloud resources and create cloud runnable systems on demand.
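To make the "blueprint" idea concrete, here is a minimal sketch of the kind of system inventory such an agent could assemble from user space, using only the Python standard library. The field names and structure are illustrative assumptions, not CloudVelox's actual schema.

```python
import json
import os
import platform
import shutil
import socket

def capture_blueprint():
    """Collect basic compute, storage, and network characteristics
    of the local system (a hypothetical, simplified 'blueprint')."""
    total, used, _free = shutil.disk_usage("/")
    return {
        "os_type": platform.system(),        # e.g. "Linux" or "Windows"
        "os_version": platform.release(),
        "cpu_count": os.cpu_count(),
        "hostname": socket.gethostname(),
        "root_volume_bytes": total,          # storage size to provision
        "root_volume_used_bytes": used,
    }

if __name__ == "__main__":
    # The real product stores this in the cloud and uses it to provision
    # matching cloud resources; here we just print it.
    print(json.dumps(capture_blueprint(), indent=2))
```

A blueprint like this is what lets the cloud side create a VM with similar CPU, memory, storage, and network characteristics on demand, without re-inspecting the source system at failover time.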
The agent replicates the entire application workload into cloud storage volumes. This eliminates the need to run virtual machine instances (corresponding to the systems in the application workload) during the synchronization process, which reduces cloud costs; VM instances are created only when the application workload needs to run in the cloud. The synchronization process replicates each source system’s configuration ("blueprint") and file systems to the cloud, then continually updates the cloud copy with any changes that occur on the source systems.
Once the initial replication is complete, the product can continually update the destination environment, so it is ready for use at any time. It can also operate in what CloudVelox calls “pilot light” mode, where the application is “ready-to-go” in the case of a fail-over, but computing costs are not being incurred on the cloud (there will be storage costs of course).
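The economics of "pilot light" mode can be modeled simply: storage is kept in sync continuously, but compute is provisioned only when a failover or migration is triggered. The sketch below is a toy state model of that behavior; the class and method names are hypothetical, not CloudVelox's API.

```python
class PilotLightWorkload:
    """Toy model of pilot-light replication: continuous storage sync,
    with compute provisioned only on demand."""

    def __init__(self, blueprint):
        self.blueprint = blueprint       # captured system description
        self.volumes_synced = False      # cloud storage kept up to date
        self.instance_running = False    # no VM (no compute cost) yet

    def sync(self):
        """Continuous replication: updates storage volumes only.
        Storage costs accrue, but no compute costs."""
        self.volumes_synced = True

    def activate(self):
        """On DR failover or migration, provision a VM from the
        blueprint and the already-synced volumes."""
        if not self.volumes_synced:
            raise RuntimeError("cannot activate before initial sync")
        self.instance_running = True
```

The key point the model captures is that between syncs the workload is "ready-to-go" without any running instances, which is why only storage costs are incurred until `activate()` is called.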
The product supports both Windows and Linux workloads, and currently provides cloud migration, cloud recovery, and cloud dev/test use cases into the AWS Cloud and AWS GovCloud, with plans for other destination environments in the near future.
The continuously updated systems can then be used to move any workload into the cloud, permanently or temporarily. A permanent move applies when migrating a workload from your data center into the cloud; a temporary move applies in the case of DR. Finally, the same setup can easily replicate production into the cloud for test and development purposes. The only difference for test and dev is that the production DNS records are not changed to point to the new VMs.
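That single operational difference between the three use cases can be summarized in a few lines. This hypothetical helper (not part of any real product's API) just encodes the rule stated above: migration and DR cut production DNS over to the cloud VMs, while dev/test deliberately leaves production traffic untouched.

```python
def should_update_production_dns(use_case: str) -> bool:
    """Return True if production DNS should point at the new cloud VMs
    for the given use case (illustrative mapping only)."""
    cutover = {
        "migration": True,   # permanent move: traffic follows the workload
        "dr": True,          # temporary move: traffic fails over
        "dev_test": False,   # copy of production: keep traffic on-prem
    }
    return cutover[use_case]
```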
At some point, every workload will run exactly where it should, optimizing cost, performance, or flexibility based on the customer’s needs at the time. Products like CloudVelox, which let you dynamically move workloads between the local data center and the public cloud, are what will make this happen. CloudVelox takes much of the guesswork out of the process by automatically inspecting the systems it needs to replicate and then managing replication of the entire configuration, including the OS, libraries, binaries, application stack, and application data. Not only does CloudVelox enable data-to-cloud migration, it also ensures that when the application starts in the cloud it has been properly provisioned and will work as expected.