How To Design a Hyper-V Disaster Recovery Plan

Server failure, storage system failure and data center failure are all forms of disaster that can impact a Hyper-V environment. Now IT planners should add ransomware to that list. How should the Hyper-V administrator design a disaster recovery plan?

Disaster Recovery Basics

Most IT professionals think of natural disasters as the primary threat. But recent events are proving that cyber-attacks may be the greater concern, since they can hit any organization, anywhere, at any time. Add to that the ever-present danger of a server or storage system failure, and it's easy to see that IT has its work cut out for it.

The first step in any recovery effort is to make sure the backup process captures a secure copy of data frequently enough that applications or workloads can return to operation quickly enough to meet the organization's expectations.

The second step is to make sure that protected data is available in a remote location. For most organizations, preparation of the remote site is done by replicating the backup copy to another data center owned by the organization, a managed service provider or the public cloud. IT should not consider the backup process complete until the remote location has a copy of the data.
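The "not complete until the remote location has a copy" rule can be enforced with a simple post-replication check. A minimal sketch, assuming both the local backup file and the remote copy can be read back for hashing (the paths and function names here are hypothetical, not part of any particular backup product):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in fixed-size chunks so large backup images don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replication_complete(local_copy: Path, remote_copy: Path) -> bool:
    """The backup counts as complete only when the remote copy exists and matches."""
    return remote_copy.exists() and sha256_of(local_copy) == sha256_of(remote_copy)
```

In practice the remote read-back would go over whatever protocol the replication target exposes; the point is that the backup job should not report success until a check like this passes.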

The final step is testing. The purpose of testing is twofold. First, a test is the ultimate verification that the data at the remote site is valid. Second, a test gives IT the experience it needs to execute the disaster plan flawlessly in the event of an actual disaster.

Recovery At Remote Site

If the organization decides to count on its own remote site for disaster recovery, then it first needs to make sure the site is far enough away from the primary site that the same natural disaster will not impact both. Second, the organization needs a hardware acquisition plan. Part of this will likely include some standby servers that are on-premises and waiting for the recovery effort to begin. More than likely, there will also be applications that can wait for servers to be ordered before being recovered. IT needs to know the recovery time for each server, and it needs to resist the temptation to provide the highest level of recovery service to every application.

One of the values of Hyper-V is that multiple applications can run on a single server, but the organization needs to be careful not to recover so many applications onto one physical server that performance during the disaster is unacceptable.
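One way to avoid overcommitting a standby host is a simple capacity check before the failover plan is finalized. A minimal sketch, where all the host figures, VM names and sizes are made-up illustrations, not from the article:

```python
# Sketch: verify that the VMs planned for recovery fit on the standby host,
# leaving headroom so performance during the disaster stays acceptable.
# All names and numbers below are hypothetical.

standby_host = {"cpu_cores": 16, "memory_gb": 128}
headroom = 0.25  # reserve 25% of the host for hypervisor overhead and bursts

vms_to_recover = [
    {"name": "sql01",  "cpu_cores": 8, "memory_gb": 64},
    {"name": "web01",  "cpu_cores": 4, "memory_gb": 16},
    {"name": "file01", "cpu_cores": 2, "memory_gb": 8},
]

def fits_on_host(vms, host, headroom):
    """True only if the combined VM demand fits within the host minus headroom."""
    usable_cpu = host["cpu_cores"] * (1 - headroom)
    usable_mem = host["memory_gb"] * (1 - headroom)
    total_cpu = sum(vm["cpu_cores"] for vm in vms)
    total_mem = sum(vm["memory_gb"] for vm in vms)
    return total_cpu <= usable_cpu and total_mem <= usable_mem

print(fits_on_host(vms_to_recover, standby_host, headroom))
```

With these example numbers the three VMs need 14 cores against 12 usable, so the check fails; the plan would have to drop a lower-priority VM or add a host.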

Recovery In The Cloud

The other option is to use a backup application that can replicate data to the cloud. For Hyper-V, Azure is an ideal destination, but Amazon AWS will also work. IT planners need to understand the data protection application's cloud capabilities. Does the data need to be restored out of the backup application's format into the cloud's data store? And do the virtual machines need to be converted into a format that the cloud's hypervisor can work with?

Reducing DR RTO/RPO

IT is constantly under pressure to meet increasingly strict recovery time and recovery point objectives. A recovery time objective (RTO) is essentially the time it takes before users can log in to an application again. RTO windows are narrowed by pre-positioning data, so less has to be transferred at the moment of disaster, or by enabling the backup storage target to temporarily act as a primary storage device, a technique also known as boot from backup.
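The effect of pre-positioning data on the RTO can be seen with a back-of-the-envelope transfer-time calculation. The dataset size and link speed below are hypothetical examples:

```python
# Sketch: the restore-transfer portion of the RTO is roughly the data that
# still has to move at disaster time divided by the link throughput.
# The figures below are illustrative only.

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to move data_gb over a link rated at link_mbps (megabits/second)."""
    megabits = data_gb * 8 * 1000
    seconds = megabits / link_mbps
    return seconds / 3600

# Restoring 2 TB from scratch over a 1 Gbps link:
print(round(transfer_hours(2000, 1000), 2))  # 4.44 hours

# With data pre-positioned, only the last increment (say 50 GB) must move:
print(round(transfer_hours(50, 1000), 2))    # 0.11 hours
```

Boot from backup removes even that remaining transfer from the critical path, since the VM runs directly from the backup target while data is copied back in the background.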

Recovery point objective (RPO) windows are narrowed by capturing copies of changed data more frequently. More frequent copies require thinner backups, so that the application is interrupted for a shorter period of time and less data has to transfer over the network. Technologies like source-side deduplication, changed-block backups and replication enable these thin, more frequent backups.
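The idea behind changed-block backups can be sketched in a few lines: only the fixed-size blocks that differ from the previous copy need to move, which is what makes frequent "thin" backups practical. The block size and change pattern below are hypothetical:

```python
# Sketch: a changed-block backup transfers only the blocks modified since the
# last copy. The disk image, block size and change pattern are illustrative.

BLOCK = 4096  # bytes per tracked block

def changed_blocks(prev: bytes, curr: bytes) -> list:
    """Byte offsets of the fixed-size blocks that differ between two images."""
    return [i for i in range(0, len(curr), BLOCK)
            if curr[i:i + BLOCK] != prev[i:i + BLOCK]]

prev_image = bytes(64 * BLOCK)        # a 64-block "disk", all zeros
curr_image = bytearray(prev_image)
curr_image[5 * BLOCK] = 1             # one block modified since the last backup
curr_image[40 * BLOCK] = 1            # another block modified

changed = changed_blocks(prev_image, bytes(curr_image))
print(len(changed), "of", len(prev_image) // BLOCK, "blocks to back up")
```

Real changed-block tracking is done by the hypervisor rather than by rescanning the disk, but the payoff is the same: two blocks move instead of sixty-four.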

StorageSwiss Take

Disaster Recovery for Microsoft Hyper-V requires a backup and replication product that is Hyper-V aware so it can properly interface with the hypervisor and perform frequent backups as well as replicate those backups to an off-site location, either owned by the organization or a cloud provider. The final step is testing to make sure that IT understands DR from an execution standpoint.

Sponsored by NAKIVO


NAKIVO is a US corporation that develops a fast, reliable, and affordable data protection solution for Hyper-V, VMware, and AWS environments. NAKIVO Backup & Replication v7 provides scheduled, image-based, application-aware, and forever-incremental Hyper-V backup and replication. VM backups can be easily copied offsite or to AWS/Azure clouds by backup copy jobs, while VM replication can create and maintain identical copies of source VMs, which can simply be powered on in case of a disaster. Over 10,000 companies are using NAKIVO Backup & Replication to protect and recover their data more efficiently and cost-effectively. Visit to learn more.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
