One of the most alarming trends in data protection is the number of different solutions data centers are deploying and managing to keep up with the organization’s recovery expectations and budget realities. The demand for rapid recovery leads organizations to deploy a separate solution for each platform they support, sometimes just to get one particular feature. It also leads them to deploy unique hardware solutions in an attempt to drive down costs. The result is data protection sprawl.
Instant – The New RTO
The biggest challenge facing IT is the demand for near-instant recovery. To meet it, IT professionals leverage storage system snapshots, application replication solutions and backup products. All of these methods have their role, but each must be operated and managed independently of the others. There is also considerable redundancy between the solutions, since the data protection process doesn’t cascade between them. The result is that IT professionals have multiple copies of data (often of the same data) at different points in time, with no good system to determine which one is the best version from which to recover in a given recovery scenario.
To make matters worse, the sprawl of storage systems and operating environments means each uses its own flavor of these three data protection methods. Each storage system has its own snapshot capability with its own interface and scripts, as does each operating system/environment. Then, each environment often gets its own backup application, and each application requires a separate storage system, again with unique interfaces. Very quickly, the data center is managing dozens of combinations of techniques to meet its data protection requirements. Recovery becomes a fire drill where IT has to pull off miracle recoveries, instead of taking an organized approach that instills organizational confidence in the process.
Bringing Order to Chaos
Enterprise-class data protection software has to evolve to keep up with the new instant RTO challenge. First, it needs to leverage and manage the existing processes. For example, backup software should work with snapshots by triggering them as part of its data protection schedule and presenting them to the backup application for protection. This integration should include the ability to determine the best available data copy from which to recover; the application should use the most recent, viable snapshot. Additionally, the backup application should leverage its online backup capabilities to ensure snapshots are application-consistent rather than merely crash-consistent.
The next step is consolidating data protection hardware to drive down the overall cost of deployment. There are two components to data protection hardware: the servers running the backup software, and the hardware that stores the protected copies of data. Data protection servers have become increasingly taxed in recent years as organizations expect more from the backup process, which now includes copy data management, instant recoveries and advanced search. Data protection storage needs to be increasingly scalable and even more capable. These systems need to sustain the overhead of deduplication and compression while at the same time delivering acceptable performance in live recovery situations.
Software Defined Data Protection
A viable solution is for backup software to implement its own storage software capabilities specifically designed for data protection, and to integrate all of these features into a single solution, including running the data protection software itself. The result is a converged data protection infrastructure: a single data protection cluster (a collection of networked servers sharing resources) that can host all protection and secondary storage operations while scaling to meet future demands.
IT needs to be careful not to be enticed by every new data protection software and hardware solution that becomes available. Obviously, there will be times when IT needs to implement a point solution, at least temporarily, to meet a specific need, but generally it should push everything to a more centralized enterprise-class solution that can deliver a consolidated foundation for both hardware and software.