Disaster Recovery Planning: Getting From Bad to Good

Disaster recovery, at its most basic, is making a copy of data and securing that copy off-site. Unfortunately, it is details like how long the organization can afford to be without an application and how much data it can afford to lose that make disaster recovery so complex. These requirements, and the time it takes to design an architecture and execute a plan that meets them, are why most disaster recovery plans go bad.
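These two requirements are commonly formalized as the recovery time objective (RTO) and recovery point objective (RPO). As a rough illustration, the sketch below checks a backup schedule against an RPO target; all of the intervals are hypothetical figures, not recommendations:

```python
# A minimal sketch of checking a backup schedule against an RPO target.
# The intervals below are hypothetical examples.
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta,
                         replication_lag: timedelta) -> timedelta:
    """Worst case: disaster strikes just before the next backup runs,
    and the most recent copy has not yet finished moving off-site."""
    return backup_interval + replication_lag

def meets_rpo(backup_interval: timedelta,
              replication_lag: timedelta,
              rpo: timedelta) -> bool:
    return worst_case_data_loss(backup_interval, replication_lag) <= rpo

nightly = timedelta(hours=24)   # one backup per day
lag = timedelta(hours=2)        # time to replicate the copy off-site
rpo = timedelta(hours=4)        # business tolerance for lost data

print(meets_rpo(nightly, lag, rpo))                 # nightly backups miss a 4-hour RPO
print(meets_rpo(timedelta(hours=1), lag, rpo))      # hourly backups meet it
```

The point of the exercise is that the RPO, not the backup product, dictates how often copies must be made and how quickly they must land off-site.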

Many organizations hoped the cloud would solve disaster recovery challenges. But cloud infrastructure, just like any other infrastructure, needs to be surrounded by the right products and people to make it work. While many products claim to provide cloud integration, the level of integration varies between vendors, as does the level of support the organization will receive to ensure the cloud is meeting its needs. It is important to realize that products are just part of the solution. A DR plan, one that will work under the duress of an actual disaster, requires appropriate facilities, services, and people to execute it when it is needed most.

Step 1 – Getting DR From Bad to Good

Most disaster recovery plans are not plans at all. They are a series of assumptions the IT staff makes about who on the team will recover what, and how, if disaster strikes. One of the most critical aspects of a successful disaster recovery plan is that data is moved off-site securely and consistently.
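"Consistently" here means the off-site copy must verifiably match the source. One common way to confirm that, sketched below, is to compare cryptographic digests before and after the transfer; the file paths and the local copy step are placeholders for whatever replication mechanism the backup product actually uses:

```python
# A minimal sketch of verifying an off-site copy against its source by
# comparing SHA-256 digests. The copy step here is a stand-in for a
# real replication mechanism.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MB chunks to hash large backup images."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_and_verify(source: Path, destination: Path) -> bool:
    """Copy a backup file and confirm the copy matches bit for bit."""
    shutil.copyfile(source, destination)
    return sha256_of(source) == sha256_of(destination)

# Demonstration with a throwaway file standing in for a backup image.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "backup.img"
    dst = Path(tmp) / "offsite-backup.img"
    src.write_bytes(b"example backup payload")
    print(copy_and_verify(src, dst))  # True when the copy is intact
```

Mature backup products perform this kind of verification internally; the sketch simply shows what "moved consistently" means at the lowest level.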

It is safe to assume that most organizations are using some sort of backup product to make a copy of data to secondary storage devices on-premises. The problem in the disk-based data protection era is how and where to move that data for off-site protection. Most backup software and backup hardware can replicate to a remote site, but that assumes the organization has a second site that is both geographically far enough away and staffed with IT personnel. Most mid-market businesses, many VARs, and even managed service providers lack these qualities in their second site, and as a result, off-site data movement becomes a hodgepodge of tasks run by multiple individuals.

The cloud, be it a cloud purpose-built for backup storage or a public cloud with multiple use cases, is an ideal answer for mid-market organizations. The cloud enables these organizations to automatically copy their latest backups to a remote location, staffed with IT professionals, without having to pay for the facility. The consistent movement of data off-premises is an important first step in disaster recovery planning.

The reality is that most organizations today are still not using the cloud in conjunction with their data protection solution. Veeam, for example, reports that less than 10% of its users are using its Cloud Connect product, which enables a Veeam customer to leverage cloud storage resources. There are three considerations when deciding to use the cloud for off-site storage. First, how seamlessly will the existing software, or prospective new software, replicate data to the cloud? Is the capability built in, or does it rely on some form of external gateway? Typically, the more seamless the connection, the faster and more reliable the data transfer.

When deciding on a storage location for the disaster recovery copy of backups, the organization faces a choice between large public cloud providers and service providers with smaller, purpose-built data centers. It is important to understand that public cloud providers provide infrastructure and only infrastructure. They do not provide support for specific products, nor do they integrate products and services into a complete solution.

Organizations that choose public cloud need to make sure they have the internal capabilities required to turn the software and infrastructure into a solution, and an important factor to consider is whether those capabilities will exist even during a disaster. The public cloud offers a low startup cost, but the extras can add up. Users have to pay more for longer retention and multi-region DR copies, as well as for support and services.
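The way retention and multi-region copies multiply a seemingly small per-terabyte rate can be seen in a back-of-the-envelope model. All of the rates and figures below are hypothetical, not any provider's actual pricing:

```python
# A back-of-the-envelope public cloud storage cost sketch.
# Every rate and figure here is hypothetical, not real pricing.
def monthly_storage_cost(tb_protected: float,
                         retention_months: int,
                         monthly_change_rate: float,
                         price_per_tb_month: float,
                         regions: int = 1) -> float:
    """Estimate monthly spend on retained backup copies. One full copy
    is kept, and each additional retained month adds an incremental
    copy sized by the data change rate. Multi-region DR copies
    multiply the whole bill."""
    full = tb_protected
    incrementals = tb_protected * monthly_change_rate * (retention_months - 1)
    return (full + incrementals) * price_per_tb_month * regions

# 50 TB protected, 12 months of retention, 10% monthly change,
# a hypothetical $20 per TB-month rate, copies kept in two regions.
print(round(monthly_storage_cost(50, 12, 0.10, 20.0, regions=2), 2))
```

Even this simplified model, which ignores egress, API request charges, and support tiers, shows how quickly retention and region count dominate the bill.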

For organizations that don’t think they will have those capabilities, either before or during a disaster, a purpose-built cloud solution is more practical. These providers focus on delivering just the services they offer, and they are in full control of their infrastructure.

Part of the decision process in selecting a purpose-built provider of disaster recovery services is which backup application(s) the provider will support. Many providers force the organization to use the provider's backup application rather than the application the organization already has in place. Even if the customer is considering switching data protection solutions, the requirement to use one very specific application is a point of concern.

Most providers that deliver their own backup solution are actually “white labeling” an application created by another developer. They have no more control over the code than if they had partnered with one of the more established data protection vendors. Whether the provider developed its software internally or not, in most cases the software will have a microscopic market share, which means very limited flexibility in moving to another provider or bringing the service back in-house if the organization decides to. It is a very effective lock-in strategy.

Organizations choosing a purpose-built cloud provider should also look for one that can support a variety of backup solutions. Even if the organization is considering an application change, having the flexibility to choose between several industry-standard applications while being able to count on the provider for cloud services is ideal. In fact, the provider should be able to help guide the organization through the selection process, advising it on the cloud capabilities of each candidate.

These first two considerations enable the organization to get from bad to good: a copy of data securely and consistently stored off-site. The third consideration is whether the software and the provider will be in a position to deliver advanced services like DRaaS. The proper execution of cloud-based recovery is what moves the organization’s DR capabilities from good to great. In our next entry, we’ll discuss how to get from good to great.

Sponsored by KeepItSafe

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

