Most organizations have very low confidence in their ability to consistently recover data, whether from a minor outage or a major disaster. They have been through too many incidents in which data could not be recovered, took too long to recover, or, even when recovery did work, felt more like sheer luck than an expected outcome. Improving recovery confidence is a critical step in data center modernization.
Confidence Comes from Practice
The easiest way to improve the organization’s confidence in IT’s ability to recover data on time, every time, is to constantly practice recovery of critical systems. The problem is that practice is time and resource intensive. First, data has to be restored, typically across a network, onto a new server with the appropriate amount of storage. Then the administrator has to isolate the application being tested from the network and start it up. This process is certainly doable, but given how busy the typical IT administrator is, it is not likely to occur frequently.
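To see why the restore step alone discourages frequent practice, consider a rough back-of-the-envelope estimate of the network transfer. The dataset size, link speed, and efficiency factor below are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope estimate of the data-transfer step in a
# traditional restore. All figures are illustrative assumptions.

def restore_hours(dataset_tb: float, network_gbps: float,
                  efficiency: float = 0.7) -> float:
    """Hours to copy a dataset across the network at a given link speed.

    efficiency approximates protocol overhead and contention (assumed).
    """
    dataset_bits = dataset_tb * 8 * 1000**4       # TB -> bits (decimal units)
    usable_bps = network_gbps * 1e9 * efficiency  # effective throughput
    return dataset_bits / usable_bps / 3600

# A 10 TB application server over a 10 Gb/s link:
hours = restore_hours(10, 10)
print(f"{hours:.1f} hours")   # roughly 3.2 hours, before any application work
```

And that is only the copy; provisioning the target server and isolating the application add more time on top.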
Recovery in Place is Required
Potentially the most important capability for improving the organization’s confidence in recovery is the ability to recover directly from backup storage, sometimes called recovery in place or boot from backup. This capability eliminates much of the logistics described above. No data needs to be transferred, and no extra storage needs to be purchased. Essentially, all that needs to occur is that a virtual machine is started on a host and pointed at a virtual volume that the backup solution creates.
The Complication of Recovery In Place
Recovery in place does create some complications that need to be addressed. First, it can create a degree of data protection sprawl: the organization will need backup software that supports the feature, as well as hardware to host the virtual image.
The storage also becomes more complicated, because most data protection storage platforms are not designed to deliver anything close to production-level performance. Instead, they are built to store as much data as possible at the lowest possible cost. For many data centers, the performance limitations of their data protection storage systems force them to add yet another storage system just to host recovery in place efforts.
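The gap shows up most clearly in random-read performance. The figures below are illustrative assumptions, not measurements of any particular product, but they show the shape of the problem:

```python
# Illustrative comparison of random-read IOPS demand vs. supply when a VM
# boots directly from backup storage. All numbers are assumptions.

vm_iops_needed = 2000   # a modest production VM under load (assumed)
appliance_iops = 500    # capacity-optimized backup target rehydrating
                        # deduplicated blocks on random reads (assumed)

shortfall = vm_iops_needed / appliance_iops
print(f"The VM demands {shortfall:.0f}x the random-read IOPS the backup "
      "target can serve, so I/O queues up and the recovered app crawls.")
```

Under these assumed numbers the recovered application would run at a fraction of production speed, which is exactly why organizations end up buying a second, faster system for recovery in place.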
Keeping Recovery in Place Simple
To keep recovery in place simple, and thereby ensure its use, organizations need to look for solutions that consolidate the feature as much as possible. A single storage system that can provide both cost-effective storage of backups and performance-oriented recovery in place is key. If that solution can also integrate the data protection software itself, the process becomes simpler still.
Still Room For Archive
Despite all this consolidation, there is still room for an archive process. An archive tier can reduce the amount of primary storage that needs constant protection by as much as 80%. Focusing protection on the data that really matters also contributes to improved recovery confidence. In addition, if the archive tier can work with the data protection solution, the data protection tier can slow its own growth by sending backups to the archive tier.
Recovery confidence is the result of practice and of focusing on the data that really matters during a disaster. Consolidating data protection solutions helps, but archive should remain a separate process, one that works tightly with data protection. With that foundation in place, IT can leverage capabilities like recovery in place to increase its test iterations, gaining the experience needed to properly execute a recovery when it matters most.