Overcoming the Shortcomings of SaaS-Based Data Protection

Most organizations don’t make money from their data protection process; they view it as an insurance policy in case something goes wrong. Yet those same organizations make sporadic investments in data protection infrastructure, and these investments consume a considerable portion of the IT budget. Data protection spend often exceeds budget, becoming an unplanned expense whose costs must be pulled from other IT projects or covered by additional funding. To avoid these problems, many organizations are turning to services-based data protection so that the process becomes a more predictable expense.

In addition to wanting data protection to be a more predictable expense, IT professionals are under increasing pressure to meet strict service level agreements (SLAs). Users and application owners want reduced data loss and faster recovery times. The problem is that most cloud-based SaaS solutions don’t enable the organization to meet these new SLA requirements.

Where Cloud Data Protection Falls Short

The primary area where cloud data protection falls short is recovery. While features like deduplication, compression, and changed-block backup reduce the time required to back up data to the cloud, they do little to help with recovery. In almost every case, a recovery requires restoring ALL of the application’s or workload’s data, because technologies like deduplication and block-level incremental backup need the full data baseline in place before the restored copy is usable. The need to recover all the data means that recovery can take hours, if not days.
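The dependency on a baseline is easy to see in miniature. The sketch below is illustrative only (the block maps and version labels are hypothetical, not any vendor’s format): each incremental backup carries only the changed blocks, so a restore must still pull every block of the original baseline.

```python
# Minimal sketch of why block-level incremental backups need the full
# baseline at restore time. Block maps and sizes are illustrative only.

def restore(baseline, incrementals):
    """Rebuild the latest image: start from the full baseline,
    then overlay each incremental's changed blocks in order."""
    image = dict(baseline)          # every block must come from somewhere
    for inc in incrementals:
        image.update(inc)           # only changed blocks travel per backup
    return image

# A 6-block "disk": the baseline holds all blocks...
baseline = {i: f"v0-block{i}" for i in range(6)}
# ...while each nightly incremental carries only what changed.
night1 = {2: "v1-block2"}
night2 = {2: "v2-block2", 5: "v1-block5"}

image = restore(baseline, [night1, night2])
# Blocks 0, 1, 3, and 4 exist ONLY in the baseline, so a recovery must
# pull the entire baseline even though the incrementals were tiny.
untouched = sorted(i for i in image if image[i].startswith("v0"))
print(untouched)  # the blocks recoverable only from the baseline
```

The incrementals here total three blocks, yet the restore still touches all six; that asymmetry is exactly why fast backups don’t imply fast recoveries.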

As a workaround to the full-recovery problem, many cloud providers now offer disaster recovery as a service (DRaaS) features that enable IT to recover the application in the cloud. These services leverage cloud compute and cloud storage to instantiate the organization’s applications in the cloud, providing quick, but not instant, recovery. Most providers claim it takes one to four hours to have the application up and running in the cloud. At that point, IT still needs to resolve any networking issues that arise after starting the cloud-based instance.

Part of the time involved in instantiating the recovered application is spent converting data from the provider’s backup format into a format the provider’s hypervisor can use. The provider may also need to copy the working data set to a separate, more production-quality storage architecture. Both of these transfers take valuable time and complicate the eventual return to on-premises.

A bigger issue arises when it’s time to move the application back on-premises. While the application runs in the provider’s cloud, its data keeps changing. Returning on-premises usually requires recovering all of the application’s data back to the data center, after which the organization must schedule downtime to switch over from the cloud instance and perform a final resync of data. The organization can schedule the move back on-site for a weekend, but not every application can push all of its data through this recovery process in a single weekend.
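Back-of-the-envelope arithmetic shows why a weekend window can be too short. The numbers below are hypothetical (dataset size, link speed, and utilization are assumptions, not measurements from any provider):

```python
# Rough failback time estimate: hypothetical numbers, not vendor figures.

def transfer_hours(dataset_tb, link_gbps, efficiency=0.7):
    """Hours to move a dataset over a WAN link at a given effective utilization."""
    bits = dataset_tb * 8 * 10**12                    # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 10**9 * efficiency)  # usable throughput
    return seconds / 3600

# A 20 TB application over a 1 Gbps link at 70% effective throughput:
hours = transfer_hours(20, 1.0)
print(round(hours, 1))  # roughly 63 hours -- longer than a typical weekend window
```

A Friday-evening-to-Monday-morning window is only about 60 hours, so even before the final resync and cutover, the bulk transfer alone can overrun it.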

What IT Needs

Data protection as a service makes sense. Smoothing out the expense spikes of the traditional data protection acquisition model benefits the IT budget and enables IT to cost-effectively integrate new applications or workloads into the data protection process without sacrificing protection quality. IT, though, also needs alternatives both to cloud-based recovery and to waiting for the entire application dataset to be restored before starting the application. An on-premises appliance acting as a cache helps, but it gets expensive if the organization has to size it to store a copy of all of its applications’ data. If the on-premises appliance is complemented by a streaming recovery model that makes the application or workload available while data is still being restored, the organization alleviates much of the concern around cloud-only recoveries.
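The streaming recovery idea can be sketched as recover-on-access: the application starts immediately, reads that hit already-restored blocks are served locally, and a miss pulls just that block from the cloud while a background restore fills in the rest. This is a simplified model, assuming a hypothetical `cloud_fetch()` call; real products implement the equivalent at the volume or hypervisor layer.

```python
# Sketch of a streaming (recover-on-access) restore. cloud_fetch() is a
# stand-in for pulling one block from the cloud backup copy.

def cloud_fetch(block_id):
    return f"data-{block_id}"   # hypothetical per-block retrieval

class StreamingVolume:
    """Serve reads immediately: local cache first, cloud on a miss."""
    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.local = {}                      # blocks restored so far

    def read(self, block_id):
        if block_id not in self.local:       # miss: fetch just this block now
            self.local[block_id] = cloud_fetch(block_id)
        return self.local[block_id]          # the app never waits for a full restore

    def background_restore(self, target_blocks):
        # Trickle remaining blocks down while the app is already running.
        for b in range(self.total_blocks):
            if len(self.local) >= target_blocks:
                break
            self.local.setdefault(b, cloud_fetch(b))

vol = StreamingVolume(total_blocks=1000)
first_read = vol.read(42)        # usable on first access, long before restore ends
vol.background_restore(100)
print(len(vol.local))            # restore continues underneath the running app
```

The design trade-off is that cold reads pay cloud latency until the background restore catches up, which is why pairing the model with a right-sized on-premises cache matters.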

In our next blog, Storage Switzerland will detail how data protection as a service needs to change so that IT can move to an infrastructure-less data protection model without compromising recovery times. In the meantime, watch our on-demand webinar “How to Create an Infrastructure-less Backup Strategy” to learn more about the challenges that data protection infrastructure creates and how to overcome them with a SaaS-based model.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

