Protection Service Levels – Sometimes OK is OK

In my last blog, “The Problem with Gold-Only Data Protection Service Levels,” we looked at how to design a gold service level for data protection and the pros and cons of that approach. In this entry, we look at the “OK” service levels: silver and bronze. Designing a silver or bronze service level is almost a lost art. It requires not only the technology and skill to make sure the level meets expectations, but also some negotiating ability, because the assumption is that everyone wants the best. After all, who doesn’t want gold?

Negotiating People off The Gold Pedestal

The first concept to understand when convincing users or application owners that they don’t need the gold level is that people choose a lower level of service all the time. Right now, large beer manufacturers are running ads against craft brewers, claiming to make beer for all the people. An even better example is a wireless provider running an ad that explicitly says its network isn’t as good as its competitors’, but that it is “good enough.”

The second concept is that most of the time IT can move applications and user data to a lower service level and no one notices the difference. The problem, though, is that IT must tell someone. A service level objective program is an excellent way to present the idea of a lower service level.

The third concept is that under duress, most organizations can run, at least for a little while, with only a small fraction of their data. During a full-scale, data-center-destroying disaster, only must-have applications need recovery, while the nice-to-have applications can wait.

What Do Silver and Bronze Service Levels Look Like?

Silver and bronze service levels differ from gold because they balance the needs of the application and its users against the realities of the budget. The silver level most typically covers second-tier applications that the organization uses daily but can live without for a few hours. Organizations should protect these applications with traditional backup applications, backing them up every four to six hours, depending on the number of transactions they handle.

The backup target should be disk or cloud storage. When recovering these applications, IT should restore them from disk directly to the replaced or repaired server rather than use recovery-in-place technology, for the reasons articulated in our last blog. Using cloud storage for these applications may be problematic depending on transfer time; if cloud backup is used, then recovery in the cloud is required.

IT should reserve the bronze level of data protection for seldom-used applications or data. Old unstructured data and retired applications most typically fit this service level. In most cases, an archive process, rather than the backup process, should manage and protect bronze-level data. Too many organizations use backup as a retention mechanism; archiving data at the bronze level reduces the cost of the backup infrastructure and makes it more efficient.
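As a rough sketch, the three service levels described here could be captured in a simple policy table. The gold interval and the bronze archive cadence below are illustrative assumptions (this post defers gold to the previous entry and gives no bronze frequency); only the silver four-to-six-hour window comes from the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionTier:
    name: str
    mechanism: str       # "backup" or "archive" -- bronze uses archive, per the post
    interval_hours: int  # how often a protected copy is made
    target: str          # where protected copies land

# Hypothetical values: gold's 1-hour interval and bronze's weekly archive
# cadence are assumptions for illustration, not recommendations from the post.
TIERS = {
    "gold":   ProtectionTier("gold",   "backup",  1,   "replicated disk"),
    "silver": ProtectionTier("silver", "backup",  6,   "disk or cloud"),
    "bronze": ProtectionTier("bronze", "archive", 168, "archive store"),
}
```

A table like this makes the negotiation concrete: an application owner is not asked to accept “worse,” but to pick a named row with an explicit interval and target.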

Reduce but Don’t Miss

IT needs to be careful that these applications don’t slip beyond their silver or bronze service levels. Users and application owners are already accepting reduced service; don’t rub salt in the wound by missing the recovery window. The challenge with the silver, and especially the bronze, service level is determining which data belongs in it and making sure that current protection jobs are meeting the agreed-upon service levels.
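The “make sure jobs are meeting the agreed-upon service levels” check can be automated. The sketch below is hypothetical (the tier intervals and the data shape are this illustration’s assumptions, not a real backup product’s API): it flags any application whose last successful protection run is older than its tier’s interval.

```python
from datetime import datetime, timedelta

# Assumed tier intervals (hours); only silver's window comes from the post.
TIER_INTERVAL_HOURS = {"gold": 1, "silver": 6, "bronze": 168}

def out_of_compliance(apps, now):
    """Return the names of apps that have slipped past their service level."""
    missed = []
    for app in apps:
        allowed = timedelta(hours=TIER_INTERVAL_HOURS[app["tier"]])
        if now - app["last_success"] > allowed:
            missed.append(app["name"])
    return missed

apps = [
    {"name": "erp",      "tier": "silver", "last_success": datetime(2024, 1, 1, 0, 0)},
    {"name": "old-docs", "tier": "bronze", "last_success": datetime(2024, 1, 1, 0, 0)},
]

# Twelve hours after the last run: the silver app has missed its 6-hour
# window, while the bronze app is still well inside its weekly one.
print(out_of_compliance(apps, now=datetime(2024, 1, 1, 12, 0)))
```

Running a report like this daily turns a vague promise (“silver is backed up every few hours”) into something that surfaces a slip before the application owner discovers it during a recovery.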

Meeting and maintaining service levels is a crucial focus of our on-demand webinar “Rediscovering the Lost Art of Protection Service Levels,” which covers how to align data protection policies with both the ever-increasing number of applications in the data center and the ever-increasing capacity requirements of aging data. Watch the webinar and learn when and how to apply “OK” to your data protection process.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

