The data protection process always seems to be playing catch-up with the recovery expectations for production data. Part of the problem is that data protection is often considered only after an application has rolled out into production. Another part is that the recovery bar is rising faster than IT can possibly upgrade all the various components of the recovery process. The final part is that IT, often because of cost concerns, tries to stretch the backup process to also provide high availability and data archiving.
The first step in “future proofing” the data protection process is to develop a three-pronged strategy for making copies of production data. The backup solution should remain the foundational component. All data should be copied by the backup process and stored both on an on-premises secondary storage system and in an off-premises location like the cloud.
But the backup process should NOT always be the primary recovery method. Priority One applications are mission-critical workloads that need a method to capture data more frequently than backup’s once-per-night schedule. They also need a secondary copy of data in a more ready-to-run format, so that in the event of a server, storage or even data center disaster, a new instance of the application has easy access to a very recent copy of data. With this solution in place, IT should be able to meet almost all recovery expectations.
The second step is to decide which files the organization will NOT need in a disaster. This data should be segregated and backed up via a different schedule and retention policy. Segregating this data keeps it from bottlenecking the recovery process, ensuring that only data the organization needs is part of the recovery plan. Ideally, this data should be placed into a separate archive.
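The segregation step above can be sketched as a simple policy-routing rule. This is a minimal illustration only, assuming a last-access-time cutoff of 180 days; the cutoff, function name and policy names are hypothetical, not from the article:

```python
import os
import time
from typing import Optional

# Illustrative threshold: the 180-day cutoff is an assumption, not a
# recommendation for any specific environment.
ARCHIVE_AFTER_DAYS = 180

def assign_policy(path: str, now: Optional[float] = None) -> str:
    """Route a file to the primary backup policy or a separate archive
    policy based on last-access time, so that rarely used data does not
    bottleneck the recovery process."""
    now = now if now is not None else time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    return "archive" if age_days > ARCHIVE_AFTER_DAYS else "backup"
```

In practice the routing decision would come from the organization's own retention rules (owner, department, compliance class), not access time alone; the point is simply that the decision is made up front, before the data ever reaches the backup stream.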
The final step is to assign a recovery expectation, or objective, to each application and/or data set in the environment. That expectation is typically expressed as the minutes or hours required to recover the data set. It should also state how much data loss (the time between protection events) is acceptable. The temptation will be to put many more applications into the Priority One category than is realistic; IT should limit, as much as possible, the number of applications or data sets that earn a Priority One ranking. This assignment process is called setting service level objectives for each application or data set.
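The service level objectives described above can be modeled as a simple record per application. This is a minimal sketch, not a prescribed format; the application names, RTO/RPO values and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ServiceLevelObjective:
    application: str
    rto: timedelta       # recovery time objective: how long recovery may take
    rpo: timedelta       # recovery point objective: acceptable data-loss window
    priority_one: bool   # mission critical: needs more than nightly backup

# Hypothetical examples of the assignment exercise.
slos = [
    ServiceLevelObjective("order-processing", timedelta(minutes=15),
                          timedelta(minutes=5), True),
    ServiceLevelObjective("hr-file-share", timedelta(hours=8),
                          timedelta(hours=24), False),
]

# Priority One should be the exception, not the rule, so it is worth
# checking how many applications have been given that ranking.
p1 = [s for s in slos if s.priority_one]
print(f"{len(p1)} of {len(slos)} applications are Priority One")
```

A table like this gives IT something concrete to review with the business: each line pairs an application with the recovery time and data-loss window it was promised.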
With these three steps, IT has all the elements it needs to evolve the data protection process over time. IT then uses service level objectives to communicate with the rest of the organization how recovery will work given the three most likely disaster situations (server, storage, site).
To learn more about developing a future proof data protection strategy, join us for our on-demand webinar, “How to Future Proof Your Data Protection Process”.