The Role of Backup in a Disaster Recovery Operation

Almost anything that interrupts users from accessing their applications or their data is, to them, a disaster. For most IT professionals, though, a true disaster is an event that causes the loss of the entire data center. Disasters of this magnitude are often natural, like floods, earthquakes or tornadoes, but they can also be man-made, like a cyber-attack or even human error. When a disaster occurs, the goal is to bring critical applications online as quickly as possible. Those recoveries are often made possible by leveraging replication applications to copy an active version of the data set to the DR site as frequently as possible. Most assume the backup process has no role to play in a disaster recovery and are surprised to find out it does.

1. When All Else Fails – Recovery from the Backup Process

The modern data center has access to a wide range of tools to help in the recovery process. Most storage systems can replicate data to another storage system at the DR site. Some can replicate to multiple sites, and a few can replicate to the cloud. Replication, as we described in our entry “Backup, Replication and Snapshots – When to Use Which?”, enables data centers to copy data to a remote site in near-real time as it changes.

But as Robert Burns said, “The best-laid plans of mice and men often go awry.” Sometimes IT professionals arrive at the secondary site only to find that the replication jobs were not working as well as they thought or were reporting false positives, or, more commonly, that a configuration error left some of the components the application needs unreplicated.

This is a key role for the backup process during a disaster: filling in the gaps caused by either application or human error. For backup to fill those gaps, though, the backup application needs to run frequently and has to be set to protect absolutely everything.
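
To make the gap-filling role concrete, here is a minimal sketch, using an entirely hypothetical application inventory and made-up component names, of cross-checking what replication is configured to copy against everything the application actually needs. Any component replication misses can only be recovered at the DR site if the backup process really does protect it.

```python
# Hypothetical sketch: surface replication configuration gaps before a
# disaster, not during one. All inventories and names are illustrative.

# Everything the application stack actually needs to come back online.
application_inventory = {
    "orders-db", "orders-app", "orders-config",
    "auth-service", "file-share", "dns-zone-files",
}

# What the replication jobs are configured to copy to the DR site.
replicated_items = {"orders-db", "orders-app", "auth-service"}

# What the backup application protects (ideally: absolutely everything).
backed_up_items = {
    "orders-db", "orders-app", "orders-config",
    "auth-service", "file-share", "dns-zone-files",
}

# Components replication misses but backup can restore at the DR site.
gaps_covered_by_backup = (application_inventory - replicated_items) & backed_up_items

# Components nothing protects -- these are the real emergencies.
unprotected = application_inventory - replicated_items - backed_up_items

print("Backup must restore at the DR site:", sorted(gaps_covered_by_backup))
print("Protected by nothing:", sorted(unprotected))
```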

2. Restore The Less Critical

In a disaster, at least initially, the organization needs its most critical applications and data (probably the most recently accessed) available as quickly as possible. Again, assuming the process works, that is the role of replication. At this point in the recovery process, IT should reserve backup resources to support that critical recovery effort, as described above.

In many cases the time spent operating from the DR site is short, and no other applications or data need to be recovered. But there are times, especially in the case of a total data center wipe-out, when IT needs to recover less critical applications and data as well. In most cases it is acceptable to recover this data from the backup process. This means that even backup targets like tape libraries are acceptable sources for the recovery, since the replication process handles the time-sensitive restores.

Using the backup software and hardware to handle the less critical recoveries also reduces the investment in the more expensive replication process. After all, time is money, and because replication saves time it costs more money. Only the most important applications and data should go through it.
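
As a simple illustration of that tiering decision, the sketch below sorts workloads into a replication tier and a restore-from-backup tier based on their recovery time objectives. The application names, RTO figures and threshold are all invented for the example.

```python
# Hypothetical sketch: decide which applications justify the cost of
# replication and which can wait for a restore from backup targets
# (deduplicated disk, even tape). Names and RTO figures are illustrative.

applications = [
    {"name": "order-processing", "rto_hours": 1},
    {"name": "email",            "rto_hours": 4},
    {"name": "hr-portal",        "rto_hours": 48},
    {"name": "archive-reports",  "rto_hours": 72},
]

# Anything that must be back within this window goes through replication;
# everything else is restored from the backup process once the critical
# systems are online.
REPLICATION_RTO_THRESHOLD_HOURS = 8

replicate = [a["name"] for a in applications
             if a["rto_hours"] <= REPLICATION_RTO_THRESHOLD_HOURS]
restore_from_backup = [a["name"] for a in applications
                       if a["rto_hours"] > REPLICATION_RTO_THRESHOLD_HOURS]

print("Replicate (expensive, fast):", replicate)
print("Restore from backup (cheaper, slower):", restore_from_backup)
```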

3. Protect The New Data Center

The final role of backup in a disaster is to protect the DR data center itself. Once critical applications start and users begin accessing them, the DR site becomes the primary data center. It will store new and changed data that likely won’t be available anywhere else. It is important to protect this data, and the backup process is the best place to start. If it looks like the organization will operate out of the DR site for a considerable period of time, it may even want to consider restarting replication and copying mission-critical data to yet another location.
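
A minimal sketch of what that shift can look like in policy terms, with the policy structure, site names and schedule values all assumed for illustration: the backup jobs are re-pointed at the DR site, and replication is restarted toward a third location so mission-critical data has another copy.

```python
# Hypothetical sketch: once the DR site becomes the primary data center,
# the protection policy has to follow it. Every value below is an
# assumption made up for this example.

protection_policy = {
    "primary_site": "datacenter-east",          # lost in the disaster
    "backup_target": "datacenter-east-library",
    "backup_schedule": "hourly-incremental, nightly-full",
    "replication_target": "dr-site-west",
}

def fail_over_protection(policy):
    """Re-point data protection at the DR site after failover.

    Backups now run against the DR site, and replication is restarted
    toward a third location (here, a hypothetical cloud region) so the
    new primary's mission-critical data is copied elsewhere.
    """
    policy = dict(policy)
    policy["primary_site"] = "dr-site-west"
    policy["backup_target"] = "dr-site-west-library"
    policy["replication_target"] = "cloud-region-a"
    return policy

new_policy = fail_over_protection(protection_policy)
print(new_policy)
```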

StorageSwiss Take

Despite all the new technology available to make applications available more quickly after a disaster, backup continues to play a large role in the process. First, it is the recovery point of last resort in case something goes wrong with the more advanced technology. Second, it can limit the amount of data that has to be protected by the more expensive replication process. Third, it can protect the DR site, which is now the new data center. In other words, instead of using new technologies like replication to replace backup, those technologies should augment it.

Sponsored by Commvault

Eleven years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
