“It’s not about backup, it’s about recovery” is a common refrain from the marketing departments of backup software suppliers, but a recovery can’t be performed without a quality copy of the data. Cloud storage’s low cost and automatic off-site capabilities, as well as the potential to recover in the cloud, appeal to IT planners looking to create a “great” disaster recovery plan. Getting data to the cloud, however, remains a critical first step that can’t be overlooked.
Backing Up to the Cloud
WAN bandwidth is the limiting factor in copying or moving data to the cloud. Most data protection solutions overcome bandwidth limitations by using either block-level incremental backups or data replication, combined with compression and deduplication.
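The block-level incremental idea can be sketched in a few lines: hash each fixed-size block of the data set and send only the blocks whose hashes changed since the last backup. This is a minimal illustration, not any vendor's implementation; the block size and function names are assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (illustrative choice)

def block_hashes(data: bytes) -> list[str]:
    """Fingerprint each fixed-size block of the data set."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(previous: list[str], current: list[str]) -> list[int]:
    """Indexes of blocks that differ from the last backup.

    Only these blocks need to cross the WAN; unchanged blocks are
    already on the provider's storage from the foundational copy.
    """
    return [
        i for i, h in enumerate(current)
        if i >= len(previous) or previous[i] != h
    ]
```

Note that this only saves bandwidth once a full set of block hashes (and the blocks themselves) exists on the provider's side, which is exactly why the foundational copy matters.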
These technologies only work once the original backup set is on the provider’s storage. The first step, of course, is creating that initial backup copy, which is typically stored on-premises and then synced to the cloud. In most cases the organization has already created this copy through its backup software, but that copy only has value if the provider supports the organization’s data protection solution.
If the cloud backup or cloud DR service forces the organization to adopt a new backup application, recreating and storing the foundational copy, even on-premises, will consume significant time for a larger organization. It may also require time-consuming troubleshooting to verify that every application is backing up correctly.
The next step is to get the foundational copy to the cloud provider. Most providers don’t give organizations an easy way to seed that first copy. Usually the organization must brute-force copy the original data set to the cloud, which is significantly more time consuming than creating the on-premises copy. A key question for organizations to ask potential cloud providers is how they create and seed that first copy.
If there is a way to export that first copy to the provider out of band, the organization also needs to make sure that the backup solution will recognize the seeded copy and send only updates to the cloud provider from that point forward.
Finally, the backup software and the cloud provider need to provide a mechanism for verifying that the data in the cloud matches the on-premises copy. The acid test for data integrity is recovering from the cloud copy. An advantage of cloud disaster recovery is that test recoveries should be quick and inexpensive enough that IT can run them continually.
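Short of a full test recovery, the simplest integrity check is comparing checksums of the on-premises and cloud copies. A minimal sketch of that comparison, assuming both copies can be read back as byte streams (the function names here are illustrative, not a specific product's API):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Checksum used to compare the two copies of the data set."""
    return hashlib.sha256(data).hexdigest()

def copies_match(on_prem: bytes, cloud: bytes) -> bool:
    """True only when the cloud copy is byte-for-byte identical
    to the on-premises copy."""
    return fingerprint(on_prem) == fingerprint(cloud)
```

Matching checksums confirm the bytes arrived intact; only an actual recovery from the cloud copy confirms the data is usable, which is why continual test recoveries remain the acid test.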
The first step in establishing a “great” disaster recovery plan is getting all the organization’s data to the cloud. The first part of this step is making sure that the foundational copy of data, which technologies like deduplication leverage, is quickly sent to the cloud.
The next step in a “great” DR plan is making sure that when disaster strikes, the disaster recovery site meets the organization’s recovery time and recovery point objectives. It also needs to deliver enough performance to the recovered applications to sustain operations until the disaster passes.
We will discuss cloud DR verification in our next blog. In the meantime, watch our on-demand webinar “How to Create a Great Disaster Recovery Plan”.