Testing an organization’s ability to recover from a disaster can feel almost as bad as going through an actual disaster. As a result, most organizations don’t make disaster recovery testing part of their standard workflow, which in turn makes testing a much more arduous process than it needs to be. IT typically views DR testing as difficult and disruptive, making integration into the day-to-day workflow seem impossible. IT needs to drive out the complexity surrounding DR testing, making monthly, weekly or even daily DR tests possible.
In this ChalkTalk Video, we outline how to design the data protection infrastructure so that DR testing is less complicated, enabling it to be integrated into the IT workflow and performed more frequently.
Driving out complexity starts with proper data protection architecture design and software selection. As we discuss in the video, many technologies can create a near real-time copy of data on an alternate system, and how close to real time that data capture is impacts cost. In almost every case, however, a series of steps must occur before the secondary copy is brought online, such as managing network IP addresses, preparing data for use (reindexing or replaying transaction logs) and re-mapping users to the secondary copy.
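Those pre-online steps are exactly what automation can take over. The sketch below is a minimal, hypothetical illustration of sequencing them in an orchestration script; the function names and site name are assumptions for illustration, not part of any specific product.

```python
# Hypothetical sketch: the steps that must run, in order, before a
# secondary copy can be brought online during a DR test.
# All function and site names here are illustrative placeholders.

def update_network_addresses(site: str) -> str:
    # In a real environment this would repoint DNS/IP to the DR site.
    return f"network repointed to {site}"

def prepare_data(site: str) -> str:
    # e.g., reindex databases or replay transaction logs at the DR site.
    return f"transaction logs replayed at {site}"

def remap_users(site: str) -> str:
    # Redirect user and application connections to the secondary copy.
    return f"users remapped to {site}"

def run_dr_test(site: str = "dr-site-1") -> list:
    """Run each recovery step in order and collect a simple report."""
    steps = (update_network_addresses, prepare_data, remap_users)
    return [step(site) for step in steps]

if __name__ == "__main__":
    for line in run_dr_test():
        print(line)
```

Encapsulating the steps in one repeatable script is what turns a DR test from a weekend project into something that can run on a monthly, weekly or daily schedule.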
We discuss the concept of DR testing, and how to make it easy enough that you’ll actually do it, in our on-demand webinar, “How to Create a Disaster Recovery (DR) Plan that Actually Works”.