How DR Orchestration Can Improve Success while Lowering Costs

To the organization, all applications are essential, but not all of them are critical. There is a pecking order for which applications need to be back online first. Recovery prioritization is especially crucial during a full-scale disaster where the original data center is completely unavailable. Resources within the recovery site may be constrained, as may the number of personnel available to work on the problem.

In most cases, IT has a sense of which applications need recovery first. The challenge is the extra work involved in extracting just the right components from the protection data set. A lack of understanding of application interdependencies makes recovering a specific application more complicated than performing a mass restore of the entire data center. The one-size-fits-all approach leads to higher spending on data protection, especially data protection infrastructure. It also leads to longer wait times before mission-critical applications are fully accessible to users.

Orchestration Before Prioritization

Without orchestration, there is almost no point in trying to prioritize recovery. If, for example, the organization wants to recover a mission-critical application but doesn't understand all of its interdependencies, it may end up recovering a non-functional application. Orchestration enables the organization to recognize and account for the interdependencies before recovery. It also automatically executes a correctly ordered recovery when disaster strikes. Even if IT misses a few interdependencies during planning, testing can capture those errors so the missing dependencies can be proactively added to the orchestration engine, avoiding failures during a real-world event. Orchestration essentially rewards the IT team for going through the prioritization process.
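As a rough sketch of what an orchestration engine does internally, a dependency-aware recovery order is just a topological sort of the application dependency graph. The application names and dependencies below are hypothetical, purely for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each application lists the components
# that must be running before it can come back online.
dependencies = {
    "order-entry":  ["app-db", "auth-service"],
    "auth-service": ["app-db"],
    "app-db":       [],
    "reporting":    ["app-db"],
}

# static_order() yields a recovery sequence in which every application's
# dependencies appear before the application itself.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)
```

If the engine detects a circular dependency, `TopologicalSorter` raises a `CycleError`, which is exactly the kind of planning mistake the article suggests catching during testing rather than during a real disaster.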

Deep Prioritization

While most IT professionals can correctly guess the top two or three applications they need to recover first after a disaster, deciding which applications should be recovered fourth, fifth, and sixth becomes more difficult. The difficulty increases as IT gets deeper into the DR process. Orchestration makes deep prioritization easy. IT simply adds each application's recovery process to the orchestration engine along with its interdependencies. If one of those dependencies has already been recovered because a higher-priority application required it, it isn't recovered again. With orchestration, even organizations with hundreds of applications can prioritize their recovery based on criticality. Some DR orchestration solutions will even forecast the expected recovery time of each application based on the total workflow.
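A minimal sketch of this deep-prioritization logic, assuming hypothetical applications, dependencies, and recovery durations (a real engine tracks far more state), shows both the skip-if-already-recovered behavior and a simple recovery-time forecast:

```python
# Each app lists its dependencies and an assumed recovery duration.
apps = {
    "crm":       {"deps": ["db"],        "minutes": 20},
    "db":        {"deps": [],            "minutes": 30},
    "analytics": {"deps": ["db", "etl"], "minutes": 15},
    "etl":       {"deps": ["db"],        "minutes": 10},
}
priority = ["crm", "analytics"]  # mission-critical applications first

recovered, plan, elapsed = set(), [], 0

def recover(app):
    """Recover an app's dependencies first; skip anything already done."""
    global elapsed
    if app in recovered:
        return  # a higher-priority app already required this one
    for dep in apps[app]["deps"]:
        recover(dep)
    recovered.add(app)
    elapsed += apps[app]["minutes"]
    plan.append((app, elapsed))  # app and its forecast completion time

for app in priority:
    recover(app)

print(plan)  # -> [('db', 30), ('crm', 50), ('etl', 60), ('analytics', 75)]
```

Note that `db` is recovered once, even though both priority applications depend on it, and the running `elapsed` total is the kind of forecast some orchestration tools surface for each application.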

Using Deep Prioritization to Lower Costs

One of the key advantages of deep prioritization is its ability to lower disaster recovery costs. Orchestrated prioritization makes it clear that not all applications require recovery within minutes. Modern data protection applications provide multiple ways to recover data at varying costs and speeds. A few high-priority applications may need a replication strategy, where the data protection solution replicates data to a remote primary storage system; after a disaster, the solution points applications directly at the remote system and they are back online.

A large number of applications outside the mission-critical set still have rapid recovery requirements but often lack the resources – like replication and the supporting infrastructure – necessary to meet those expectations. These applications are often best served by a data protection application's instant recovery process, where the application's volumes are instantiated directly on the backup device. That device needs to perform at production-level expectations, at least temporarily, so it may cost a little more than typical backup storage, but it is still far less expensive than a typical production storage system.

An even larger number of applications can sustain an outage of a few hours. The typical high-density, hard disk-based backup storage device works well in these situations. Here, the DR orchestration tool directs the data protection solution to recover from the hard disk-based backup appliance directly to the primary storage systems at the disaster recovery site. While slower than the two methods above, it is less costly, and thanks to orchestration the process can still be automated, so IT isn't staring at technology while recoveries happen. Instead, staff can verify that other recoveries are complete and that users can access the more critical applications.

Finally, another large set of applications, and most file system structures, can often sustain an outage of multiple hours and in some cases days. These recoveries can come from low-cost storage options like cloud storage or tape libraries. Normally, the manual effort required to direct these recoveries makes it unlikely that IT would use these platforms for this use case, but with DR orchestration the process can still be automated. The savings from using a tier 4 class of storage are significant and can dramatically reduce the cost of disaster recovery.
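The tiered approach above can be sketched as a simple lookup that picks the cheapest recovery method fast enough to meet each application's recovery time objective (RTO). The method names, recovery-time estimates, and thresholds below are illustrative assumptions, not vendor figures:

```python
# Methods ordered cheapest first, each with an assumed recovery time
# in minutes. The engine picks the first (cheapest) method that is
# fast enough to meet the application's RTO.
TIERS = [
    ("cloud/tape",       1440),  # multiple hours to a day
    ("disk backup",       240),  # a few hours
    ("instant recovery",   30),  # minutes, runs on the backup device
    ("replication",         1),  # near-immediate failover
]

def assign_tier(rto_minutes):
    """Return the cheapest recovery method whose time fits the RTO."""
    for method, recovery_minutes in TIERS:
        if recovery_minutes <= rto_minutes:
            return method
    return "replication"  # nothing fits; fall back to the fastest tier

print(assign_tier(3))     # -> replication
print(assign_tier(45))    # -> instant recovery
print(assign_tier(300))   # -> disk backup
print(assign_tier(2880))  # -> cloud/tape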

Singularity is Key

Ideally, the organization wants to look for a single solution that can provide each of these recovery types. If the organization has to resort to different vendors for each use case, it becomes very difficult for an orchestration engine to automate the process across all of these various products in a consistent fashion. IT needs to look for data protection vendors that can provide the variety of recovery types they need along with the ability to orchestrate a mixture of those types through its engine.

A key component to successful disaster recovery is testing. It is especially important when leveraging deep prioritization to identify and automate all the various application interdependencies. In our next blog we’ll discuss how DR Orchestration makes DR testing an easy, rewarding experience for IT.

Sign up for our Newsletter. Get updates on our latest articles and webinars, plus EXCLUSIVE subscriber only content.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

Tagged with: , , , , , , , , , , , ,
Posted in Blog

Enter your email address to follow this blog and receive notifications of new posts by email.

Join 25,514 other subscribers
Blog Stats
%d bloggers like this: