The Impact of Application Sprawl on Protection Service Levels

In our upcoming webinar, “Application Explosion – Rediscovering the Lost Art of Service Levels,” we will discuss how IT can continue to set service levels in environments where the number of applications it must support grows every day. Faced with unprecedented application growth, many IT organizations feel forced to give up on setting service levels for each application and switch to a “best efforts” approach. In this column we discuss the problems with a “best efforts” approach to data protection. Be sure to register for the webinar to learn how to set service levels no matter how many applications the organization supports, and how service levels can actually drive down costs.

Data protection service levels are how IT communicates its ability to protect, retain and restore the organization’s data. IT should monitor these service levels to ensure that changes in applications, the data center or user expectations don’t require a corresponding change to the service levels. The problem is that as organizations modernize, they experience a rapid increase in the number of applications. These new applications may not be quite as critical as the legacy applications, but they are still important and require protection, which means they should also have appropriate service levels attached to them.

IT already has enough of a challenge monitoring and adjusting service levels for its current handful of legacy workloads; how is it supposed to maintain similar insight into all of the new applications? In some organizations the number of new applications runs into the hundreds, and IT can’t possibly track service levels for each of them by hand. IT cannot give up, though. It needs to find a way to protect these applications and communicate that protection to the organization.

The “Best Efforts” Problem

As the number of applications grows beyond what the IT staff can maintain, the most common “solution” is a best efforts protection strategy, where all applications are treated the same and offered the same service levels. Essentially, IT promises to recover every application as quickly as it can with as little data loss as possible. The problem is that each of these applications actually has different recovery requirements: some require recovery within a few hours, others can wait a day.

Applying the same protection level to all applications means that some applications will never be recovered fast enough while others are recovered faster than they need to be. In most cases IT over-invests in the data protection infrastructure to provide the best possible protection and recovery windows, even though only a small percentage of the applications actually need to meet tight backup and recovery windows. The result is that IT overspends on the data protection architecture and still doesn’t meet the needs of the most critical applications.
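To make the over-investment concrete, here is a toy calculation. All of the tier names, per-application costs and application counts below are invented for illustration, but the shape of the result holds whenever only a small share of applications needs the most expensive tier:

```python
# Toy illustration (invented numbers): cost of protecting every application
# at the most demanding tier versus matching each application to its tier.

# Hypothetical tiers: annual protection cost per application.
TIER_COST = {"platinum": 10_000, "gold": 4_000, "silver": 1_500}

# Hypothetical application mix: only a small share needs platinum.
app_mix = {"platinum": 10, "gold": 40, "silver": 150}

# "Best efforts" in practice means sizing everything for the tightest windows.
uniform_cost = sum(app_mix.values()) * TIER_COST["platinum"]
tiered_cost = sum(count * TIER_COST[tier] for tier, count in app_mix.items())

print(f"Best efforts (everything platinum): ${uniform_cost:,}")   # $2,000,000
print(f"Tiered by service level:            ${tiered_cost:,}")    # $485,000
print(f"Overspend:                          ${uniform_cost - tiered_cost:,}")
```

With these made-up figures, the uniform approach spends roughly four times what a tiered, service-level-driven approach would, and the gap widens as the silver-tier application count grows.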

Setting Service Levels for Application Sprawl

If IT had the time, administrators could inspect and monitor backup logs and test the recovery time of each application every day to make sure its specific service levels were met. The backup software has all the data required to verify service level adherence; the administrators simply do not have the time.
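The check an administrator would otherwise run by hand can be sketched as a simple scan of backup-job records against each application's agreed recovery point objective (RPO). The application names, field names and service levels below are all invented; real backup tools expose this data in their own formats:

```python
from datetime import datetime, timedelta

# Hypothetical per-application service levels: maximum tolerated data loss (RPO).
RPO = {"billing": timedelta(hours=1), "intranet-wiki": timedelta(hours=24)}

# Invented backup-job records, as a backup tool might log them.
jobs = [
    {"app": "billing", "finished": datetime(2024, 5, 1, 9, 0), "ok": True},
    {"app": "billing", "finished": datetime(2024, 5, 1, 13, 0), "ok": False},
    {"app": "intranet-wiki", "finished": datetime(2024, 4, 30, 22, 0), "ok": True},
]

def rpo_violations(jobs, rpo, now):
    """Return apps whose last *successful* backup is older than their RPO."""
    last_ok = {}
    for job in jobs:
        if job["ok"]:
            prev = last_ok.get(job["app"])
            if prev is None or job["finished"] > prev:
                last_ok[job["app"]] = job["finished"]
    return [app for app, limit in rpo.items()
            if now - last_ok.get(app, datetime.min) > limit]

now = datetime(2024, 5, 1, 14, 0)
print(rpo_violations(jobs, RPO, now))  # ['billing']: last good backup is 5h old
```

The point of the sketch is that a failed job at 1 p.m. matters for a one-hour-RPO application but not for a daily one; it is exactly this per-application judgment that does not scale when done manually.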

It makes sense, then, that the backup application, once IT has set the initial service levels, should provide a service-level view into backup success and completion instead of the more common job-level view. Correlation to the actual service level is almost non-existent in today’s backup applications, but those that can provide a service-level view give IT a tremendous asset in ensuring the organization’s applications are protected. More advanced protection applications should mine the metadata the backup application creates and automatically adjust how, when and where applications are protected, proactively managing the process without IT intervention.

IT will still need to go through the initial interview process with application stakeholders, but once they agree to protection and recovery windows, the backup software can enforce adherence to those agreed-to service levels. The backup software should also provide an early warning when it projects that a service level can no longer be met, so the service level can be reset or IT can order and implement additional protection infrastructure.
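One way such an early-warning system could work, sketched with invented numbers: fit a least-squares trend line to recent backup durations and project when the run will no longer fit its agreed window. This is not how any particular product does it, just a minimal illustration of the idea:

```python
# Early-warning sketch (invented data): project backup duration growth and
# flag how soon the nightly run is expected to exceed its agreed window.

def project_breach(durations_h, window_h):
    """Fit a least-squares line to recent durations (one per night) and
    return the number of nights until the projected duration exceeds the
    window, or None if the trend never crosses it."""
    n = len(durations_h)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(durations_h) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, durations_h)) / denom
    if slope <= 0:
        return None  # durations flat or shrinking: no projected breach
    intercept = mean_y - slope * mean_x
    nights = (window_h - intercept) / slope  # x where the trend hits the window
    return max(0, round(nights) - (n - 1))  # nights from the latest data point

# Data growth is stretching an 8-hour window: durations in hours, one per night.
recent = [5.0, 5.3, 5.5, 5.9, 6.1, 6.4]
print(project_breach(recent, window_h=8.0))  # 6 nights until projected breach
```

A warning six nights out gives IT time to reset the service level with the stakeholders or to provision additional protection infrastructure before the window is actually missed.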

StorageSwiss Take

To learn more about maintaining protection service levels in data centers with unprecedented application growth, join Storage Switzerland and Micro Focus for our live webinar, “Application Explosion – Rediscovering the Lost Art of Service Levels,” and learn how to maintain a service-level-driven data protection strategy no matter how many applications the data center needs to support.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

