Thanks primarily to cloud-based services, users have been exposed to data centers that can offer at least the perception of applications that never go down and data that is never deleted. A simple Google search reveals that most cloud providers have suffered multiple outages in the past year; the difference is that they can respond to an outage rapidly, so only a fraction of their users are affected. That perception, however, is now being articulated as a demand for always-on applications and always-available data.
Veeam’s recent Data Center Availability Report shows that a staggering 93% of data centers worldwide are facing increased demand for an always-on application experience. The same report indicates that an equally staggering 92% are facing increased demand for data availability. These demands stem from a variety of causes, including the demise of time zones and the 9-to-5 workday, the requirement for online customer engagement, and the rise of the Internet of Things, where devices constantly send data to the data center.
The question is whether the data protection process can be designed, or re-designed, to meet this challenge. Again, the Veeam report indicates that the current design is falling short. Worldwide, fewer than 46% of mission-critical workloads can meet their current recovery point objectives (RPO) and recovery time objectives (RTO). What can be done to close this gap?
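One way to see the gap: the best achievable recovery point is bounded by how often data is protected, since everything written after the last protection event is at risk. The sketch below is illustrative arithmetic only; the intraday interval is a hypothetical schedule, not a figure from the report.

```python
# Worst-case RPO equals the interval between protection events.
nightly_interval_hours = 24          # a traditional once-a-night backup
intraday_interval_minutes = 15       # hypothetical frequent-protection schedule

worst_case_rpo_nightly = nightly_interval_hours            # up to a full day of lost data
worst_case_rpo_intraday = intraday_interval_minutes / 60   # a quarter of an hour

print(worst_case_rpo_nightly, "hours vs", worst_case_rpo_intraday, "hours")
# → 24 hours vs 0.25 hours
```

If an application's stated RPO is one hour, no amount of recovery speed fixes a nightly schedule; the protection frequency itself has to change, which is the subject of the second step below.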
Closing the Availability Gap
The first step in closing the availability gap is to virtualize aggressively, more aggressively than in the past. A key attribute (maybe the most important attribute) of virtualization is portability: a virtualized application can be moved easily between servers. For workloads that can’t be virtualized, look for physical-to-virtual solutions that can leverage virtualization as a failover point.
The second step is more frequent protection events. Once-a-night backups are no longer enough; data changes too frequently for a nightly snapshot to satisfy modern recovery point objectives. At the same time, data can’t be backed up in full throughout the day. Look for solutions that provide a sub-file or block level of data protection, such as changed block tracking (CBT) or block-level incremental backup. This capability allows backups to complete quickly, enabling repeated backups throughout the day.
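The idea behind block-level incremental backup can be sketched in a few lines. This is a simplified model, assuming a fixed block size and hash comparison against the previous backup; real CBT implementations instead track writes via hypervisor metadata rather than rescanning the volume, which is what makes them fast.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; real products vary

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of a volume image."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def incremental_backup(current: bytes, previous_hashes: list[str]) -> dict[int, bytes]:
    """Return only the blocks that changed since the last backup."""
    changed = {}
    for index, digest in enumerate(block_hashes(current)):
        if index >= len(previous_hashes) or digest != previous_hashes[index]:
            changed[index] = current[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
    return changed

# First "full" backup: record a hash for every block of a 3-block volume.
volume = bytes(BLOCK_SIZE * 3)
baseline = block_hashes(volume)

# Modify the middle block, then take an incremental backup.
modified = volume[:BLOCK_SIZE] + b"\x01" * BLOCK_SIZE + volume[2 * BLOCK_SIZE:]
delta = incremental_backup(modified, baseline)
print(sorted(delta))  # → [1] — only the changed block is copied
```

Because each incremental pass copies only the changed blocks rather than the whole volume, the backup window shrinks enough to repeat the process many times a day.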
The third step is recovery. To meet the always-on demand there are a variety of solutions available, including storage- or software-based replication. These are fine for the most critical workloads in the environment but are often too expensive for important, but not necessarily critical, workloads. For these workloads, look for a data protection solution that provides in-place recovery, meaning that data is presented to the application directly from the backup device. This eliminates the transfer of data from backup storage to primary storage, potentially saving hours of recovery time.
The final step is retention. Data has to be kept for a much longer timeframe than it did in the past, and it has to be accessible. In most cases this does not mean instantly accessible, so a more cost-effective storage platform, like the cloud or tape, is perfectly acceptable.
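A retention step like this usually comes down to a simple age-based tiering rule. The sketch below is a minimal illustration; the 30-day threshold and tier names are assumptions for the example, not vendor defaults.

```python
from datetime import date

# Hypothetical policy: keep recent restore points on primary backup
# storage for fast recovery, and move older ones to cheap archive
# storage (cloud or tape) where slower access is acceptable.
PRIMARY_RETENTION_DAYS = 30

def assign_tier(backup_date: date, today: date) -> str:
    """Decide where a restore point should live based on its age."""
    age_days = (today - backup_date).days
    return "primary" if age_days <= PRIMARY_RETENTION_DAYS else "archive"

today = date(2015, 6, 1)
for restore_point in [date(2015, 5, 28), date(2015, 3, 1), date(2014, 6, 1)]:
    print(restore_point, "->", assign_tier(restore_point, today))
# → 2015-05-28 -> primary
# → 2015-03-01 -> archive
# → 2014-06-01 -> archive
```

The design choice is simply that long-term restore points trade recovery speed for storage cost, which matches the point above: retained data must be accessible, but rarely instantly.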
The Veeam report shows that the demands on data protection are forcing it to evolve into an application and data availability process. In other words, it needs to take a more holistic approach that addresses ever-tightening RPO and RTO expectations. Solutions like Veeam enable data centers to meet these expectations head on.
To learn more about RPO and RTO, read our article “Backup Basics: What do SLO, RPO, RTO, VRO and GRO Mean?”