Do You Understand Your Disaster Recovery Buckets?

Disaster recovery is a complex and fragile process, which is why it is important for IT to simplify it as much as possible. One area where we see success is in how applications are grouped. While each application has its own recovery expectations, generally speaking they fall into three broader categories, or what we call “Recovery Buckets.” Once you group applications into these buckets, you can apply the right level of protection to each type.

What Makes DR So Complex?

The complexity and fragility of disaster recovery come from the fact that the data protection process has to interact with every server, every virtual machine, every storage system, and almost every network connection. That’s complexity. Any change to the environment requires a change to the backup process. That’s fragility. Our bucket approach reduces complexity while increasing flexibility.

Recovery Bucket 3
When dealing with a complex situation the best first course of action is to eliminate as many variables as possible. The simplest DR variables to get rid of are the servers you just won’t need, at least not right away, in the event of a disaster. This is the stuff that goes into Recovery Bucket 3.

Even with all the talk about stricter recovery point objectives (RPOs) and recovery time objectives (RTOs), most of the applications within an environment should fall into this bucket. These are servers, VMs, and applications the organization uses, but whose failure is not so disruptive that the organization can’t operate. Hours or even days of downtime for these systems are acceptable. These servers can be adequately protected by a traditional backup application writing to a cost-effective backup target, whether high-capacity deduplicated disk, the cloud, or even tape. For most organizations, the majority of the server population should fall into this category, eliminating a large number of variables.

Recovery Bucket 2

This is the bucket experiencing the most growth, and most of that growth is not coming from servers moving in from another bucket. It is coming from the new servers and VMs powering the application explosion many organizations are experiencing. While these servers are not mission critical, they are important to the business. They need to be back in operation within an hour or so, and in some cases in less than 15 minutes.

The good news is that Recovery Bucket 2 is also seeing the most innovation in tools that can meet these narrow recovery windows while remaining cost effective. Today, organizations can choose between backup applications with recovery-in-place technology and software-based replication utilities that can replicate from anything to anything.

Recovery Bucket 1

The final category is reserved for mission-critical servers that need zero or near-zero recovery times. These servers need one of two types of protection: the software-based replication described above or hardware-based synchronous mirroring. Which one to use depends on just how close to zero your RPO and RTO need to be.

If the RPO/RTO must stay in the sub-five-minute range, then synchronous mirroring is the way to go, but it has its challenges. First, it is expensive. Second, it is limited in how far from the primary site it can operate. If the RPO/RTO can stretch to even five minutes, software-based replication should do the trick. It is far less expensive, and administrators can place the secondary copy much further from the primary data center.
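The triage described across the three buckets can be sketched as a simple classification rule. This is a minimal illustration, not a prescription: the threshold values (near-zero under five minutes for Bucket 1, up to about an hour for Bucket 2) and the application names are assumptions for the example; real bucket boundaries should come from each organization’s own service levels.

```python
# Sketch of assigning applications to recovery buckets by RPO/RTO.
# Thresholds below are illustrative assumptions, not prescribed values.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    rto_minutes: float  # recovery time objective
    rpo_minutes: float  # recovery point objective


def recovery_bucket(app: App) -> int:
    """Assign an application to a recovery bucket based on its objectives."""
    worst = max(app.rto_minutes, app.rpo_minutes)
    if worst < 5:     # near-zero: synchronous mirroring territory (Bucket 1)
        return 1
    if worst <= 60:   # minutes to an hour: replication/recovery-in-place (Bucket 2)
        return 2
    return 3          # hours or days acceptable: traditional backup (Bucket 3)


# Hypothetical inventory, for illustration only.
apps = [
    App("order-database", rto_minutes=2, rpo_minutes=1),
    App("reporting-api", rto_minutes=30, rpo_minutes=15),
    App("internal-wiki", rto_minutes=24 * 60, rpo_minutes=24 * 60),
]
for a in apps:
    print(a.name, "-> Bucket", recovery_bucket(a))
```

Grouping servers this way up front is what lets IT pick one protection tool per bucket instead of debating each server individually.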

StorageSwiss Take

By placing your various servers, VMs, and applications into one of these buckets and then applying the appropriate data protection strategy to each, the data protection process is greatly simplified without becoming simplistic. It gives IT a path toward a future-proofed data protection strategy.

To learn more about the recovery buckets and how to apply service levels within your organization, join my colleague W. Curtis Preston for today’s webinar, “How to ‘Future Proof’ Data Protection for Organizational Resilience.”

Eleven years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a highly sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud, and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

