Calculating the ROI of Data Protection Service Level Objectives

Most data centers try to create a “best efforts” data protection strategy that treats all data and applications equally. Typically, the organization uses one backup application and a single backup target device, and replicates all data to a secondary site or the cloud for disaster recovery. The problem with this approach is that if the organization wants to improve the speed of protecting or recovering a particular application or data set, IT needs to upgrade the entire data protection infrastructure, not just a component of it. Considering that users and application owners continually ask for improvements to data protection, the upgrade-everything approach gets expensive and never really delivers a return on the investment.

A Service Level Objective strategy works by first admitting that not all data and applications are created equal. It also makes the safe assumption that only a small fraction of the organization's applications are truly critical to the enterprise. The reality is that the recovery of most applications and data sets can wait a little while. The other fact is that most essential restorations need data from the most recent set of backups, not from backups that are months or years old.

If these facts apply to the organization, then IT can move forward with creating a reliable data protection strategy that provides the most cost-effective protection for each type of need. The first cost savings comes from using a high-capacity disk appliance, object storage or tape to store backup data no longer considered recent. If the strategy is to use backup for retention, then the high-capacity solution should have a history of providing excellent data retention with minimal data loss. Object storage and tape provide considerably less expensive storage platforms and have an excellent reputation for retaining data.

The cloud, especially with the archive options available from many cloud providers, is also an option. The cloud storage advantage is low upfront costs: a customer only pays for the terabytes they need and does not incur the full cost of a secondary storage system. The cloud challenge is that its rental fees add up over time, and cloud providers may also charge an egress fee for any data moved out of the cloud.
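The trade-off between low upfront cloud costs and accumulating rental fees can be sketched with a simple break-even calculation. The figures below are illustrative assumptions, not vendor quotes, and the model deliberately ignores egress fees, power and support costs.

```python
# Sketch: compare cumulative cloud storage rental against an upfront
# appliance purchase. All prices here are illustrative assumptions.

def months_to_break_even(appliance_cost, tb_stored, cloud_price_per_tb_month):
    """Return the first month in which cumulative cloud rental
    exceeds the one-time appliance cost."""
    monthly_rental = tb_stored * cloud_price_per_tb_month
    month = 0
    cumulative = 0.0
    while cumulative <= appliance_cost:
        month += 1
        cumulative += monthly_rental
    return month

# Assumed figures: a $60,000, 100 TB appliance vs. $10 per TB-month
# of archive-class cloud storage.
print(months_to_break_even(60_000, 100, 10.0))  # 61
```

Under these assumed numbers the cloud stays cheaper for roughly five years, which is why the article's point about rental fees "adding up over time" matters most for long-retention data.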

It is important to note that ALL data should end up on the secondary storage tier, even data from mission-critical systems. IT needs to demand that backup software vendors add tiering capabilities to their backup solutions so that older backups can automatically migrate to these less expensive tiers.

The infrastructure should then add a higher-performing storage system for recovering business-important applications, those that fall into the silver tier. Most backup solutions have a feature that enables the instantiation of an application’s data store directly on backup storage, which eliminates the transfer time across the network.

The backup software still needs to extract the VM’s data from the backup data set and “make it ready” for instantiation. The time required to perform these steps means that booting from the backup device is not ideal for mission-critical workloads that can tolerate only a few minutes of downtime. For the vast majority of applications, however, the boot-from-backup capability provides excellent restore performance across a wide range of applications and data sets.

The ability to migrate old backup jobs after just a few days means the device chosen for this middle tier of recovery only needs to store a full backup plus a few incremental backups.
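Because the middle tier only has to hold one full backup plus a handful of incrementals, its capacity requirement is easy to estimate. The change rate and retention counts below are assumed values for illustration.

```python
# Sketch: size the silver-tier device as one full backup plus a few
# incremental backups. Change rate and counts are assumed, not measured.

def silver_tier_capacity_tb(full_backup_tb, daily_change_rate, incrementals_kept):
    """Capacity (TB) to hold one full backup plus N incrementals,
    each sized at the daily change rate of the protected data."""
    incremental_tb = full_backup_tb * daily_change_rate
    return full_backup_tb + incremental_tb * incrementals_kept

# Assumed: 20 TB protected, 5% daily change rate, 6 incrementals kept
# before older jobs migrate to the retention tier.
print(silver_tier_capacity_tb(20, 0.05, 6))  # 26.0
```

Under these assumptions, a week of recoverability costs only about 30% more capacity than a single full backup, which is what makes a modestly sized, higher-performing device practical for this tier.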

The final area is the mission-critical systems. These systems need a prepositioned copy of their data stored on a storage system that can stand in for the primary storage system in the event of a failure. The stand-in storage system needs to offer performance similar to the primary's, since it may very well become the primary at some point. Replication is the primary method organizations use to pre-populate the secondary system. Many backup solutions have a built-in replication capability, and there are also several stand-alone replication solutions that typically provide both version retention and multi-cloud support for disaster recovery.

The advantage of keeping as many applications as possible in the bronze or silver tiers is that the organization can afford to invest in the gold tier to achieve the desired results. Again, the gold tier only needs to keep the most recent versions of the application's data. IT should continue to use the backup application to populate the lower tiers with this application’s data.

A Service Level Objective strategy enables the organization to lower costs by making sure the third tier uses object storage, cloud storage or tape for retention, driving most of the cost out of the data protection process. The boot-from-backup capabilities common in most backup applications also lower costs by providing acceptable recovery times for business-important applications without having to move them into the gold tier of protection. These savings enable the organization to invest appropriately in the gold tier so that it meets the demands of application owners and users.
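The overall savings argument can be made concrete with a rough weighted-cost comparison. The per-TB protection costs and the tier split below are illustrative assumptions; the point is only that a small gold tier leaves most data on far cheaper infrastructure.

```python
# Sketch: rough cost of tiered protection versus protecting everything
# at the gold level. Tier splits and $/TB figures are assumptions.

def tiered_protection_cost(total_tb, tier_split, cost_per_tb):
    """Weighted cost of protecting data spread across the three tiers."""
    return sum(total_tb * tier_split[t] * cost_per_tb[t] for t in tier_split)

split = {"gold": 0.10, "silver": 0.30, "bronze": 0.60}  # share of data per tier
cost = {"gold": 500, "silver": 150, "bronze": 25}       # assumed $/TB protected

all_gold = 1000 * cost["gold"]                          # 1,000 TB, all gold
tiered = tiered_protection_cost(1000, split, cost)
print(all_gold, tiered)  # 500000 110000.0
```

With these assumed figures, tiering protects the same 1,000 TB for roughly a fifth of the all-gold cost, which is the ROI the strategy is aiming at.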

With the multi-tier infrastructure in place, the organization can move applications up and even down in protection levels without having to upgrade or change out data protection equipment regularly.

Meeting and maintaining service levels is a crucial focus of our on-demand webinar “Rediscovering the Lost Art of Protection Service Levels,” which covers how to align data protection policies with both the ever-increasing number of applications in the data center and the ever-increasing capacity requirements of aging data.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
