The TCO of Meeting a Backup Window

Backup is a budget area that’s treated as a pure expense, one for which investment is typically minimized. When money is spent, it often goes to short-term fixes for the most essential parts of the process, like meeting the backup window. This Band-Aid approach often results in a “sprawl” of data protection hardware that collectively must meet each day’s backup requirements. While these Band-Aids usually do accomplish the goal, they can create hidden costs and a high degree of data risk. In this article Storage Switzerland will explore the total cost of meeting the backup window and look at ways to reduce it.

Many companies are not fully aware of what running their backup infrastructure is really taking out of them, beyond the basic financial outlay. Backups may be getting done on time (just barely), but companies don’t realize what it takes for that to occur. The true costs of backup, in terms of time, risk and the toll on staff, in addition to the financial costs, are important aspects of managing data protection. Since a cost that’s too high is not sustainable, knowing the total cost of ownership, or TCO, of the backup infrastructure is essential.

The cost of buying more boxes

Too often companies are forced to buy additional backup storage systems because their original systems don’t scale adequately, in terms of either storage space or performance. In these situations the company could be buying another box because backups aren’t getting done in time (they need more performance), even though they haven’t yet used up the capacity available in the original system. Or the opposite could be true: they need more storage space but aren’t pushing the performance limits of their existing hardware. In either case, they’re wasting resources. And beyond the hardware itself, more backup targets mean more software licenses and more support contracts as well.
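To see how the backup window drives box count, consider a back-of-the-envelope sketch in Python. All of the numbers here are illustrative assumptions, not specifications for any particular appliance:

```python
# A back-of-the-envelope sketch. All figures below are illustrative
# assumptions, not specifications for any particular product.
import math

def boxes_needed(data_tb, window_hours, box_ingest_tb_per_hr, box_capacity_tb):
    """How many appliances a nightly backup needs, sized two ways:
    by the throughput needed to finish inside the window, and by
    raw capacity alone."""
    for_performance = math.ceil(data_tb / (box_ingest_tb_per_hr * window_hours))
    for_capacity = math.ceil(data_tb / box_capacity_tb)
    return for_performance, for_capacity

# 120 TB to protect, an 8-hour window, appliances that ingest
# 4 TB/hr and hold 100 TB each:
perf, cap = boxes_needed(120, 8, 4, 100)
print(f"boxes for performance: {perf}, boxes for capacity: {cap}")
# -> boxes for performance: 4, boxes for capacity: 2
# The window forces four boxes even though the data fits on two,
# leaving roughly half the purchased capacity idle.
```

In this sketch the window, not the data, sets the purchase order: the performance requirement forces four appliances while capacity alone would need only two.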

Data reduction is a big part of the economics that make disk backup appliances appealing. The deduplication process is a primary component of this data reduction, but it too is affected by the number of backup appliances implemented. In general, several smaller systems can’t offer the same level of deduplication efficiency on a given data set that one scalable backup system can, exacerbating the ‘more boxes’ problem.
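A toy simulation shows why siloed deduplication loses ground. This is a generic chunk-counting model, not any vendor’s actual algorithm: duplicates are stored once per dedup pool, so splitting the same data across independent appliances stores duplicates once per appliance:

```python
# Toy model: deduplication keeps one copy of each unique chunk.
# Splitting the same chunks across independent appliances stores
# duplicates once PER appliance instead of once overall.
import random

random.seed(0)
chunks = [random.randrange(5_000) for _ in range(100_000)]  # many repeats

global_stored = len(set(chunks))                 # one global dedup pool

silos = [set(), set(), set(), set()]             # four independent boxes
for i, c in enumerate(chunks):
    silos[i % 4].add(c)                          # round-robin placement
siloed_stored = sum(len(s) for s in silos)

print(f"global dedup ratio: {len(chunks) / global_stored:.1f}:1")
print(f"4-silo dedup ratio: {len(chunks) / siloed_stored:.1f}:1")
# The global pool stores each unique chunk once; each silo stores its
# own copy, so the effective dedup ratio drops by roughly the silo count.
```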

Using multiple boxes also results in lower capacity utilization since you can’t completely fill each one before the next system needs to be implemented. There’s always a portion of capacity on each system that goes unused, essentially wasted space.
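The arithmetic behind that stranded capacity is simple; here is a hedged example in which the 80% and 90% fill levels are assumptions for illustration, not vendor guidance:

```python
# Illustrative fill levels only: sprawl forces a safety margin on
# every box, while one large pool needs just a single margin.
box_count, box_tb, box_fill = 5, 100, 0.80      # five boxes at 80% "safe" fill
pool_tb, pool_fill = 500, 0.90                  # one 500 TB pool at 90% fill

usable_sprawl = box_count * box_tb * box_fill   # 400 TB actually usable
usable_pool = pool_tb * pool_fill               # 450 TB actually usable
print(f"capacity stranded by sprawl: {usable_pool - usable_sprawl:.0f} TB")
# -> 50 TB stranded, 10% of the raw purchase
```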

The cost of running more boxes

Backups may be getting done but, like the proverbial duck paddling furiously underwater, backup administrators can be scrambling behind the scenes to make that happen. They may be redirecting primary data between systems to load-balance backup jobs across the available storage, or rearranging those jobs to best use the available time window. The admin team also faces more day-to-day ‘care and feeding’ chores with a collection of backup appliances, since more boxes means more of everything from an administration perspective.

Of course, expanding storage with the ‘sprawl strategy’ is also much harder than it would be with a truly scalable backup solution. Installing additional systems can be more complex and time consuming than simply adding capacity to a larger system built to support backup growth. The added complexity can also translate into IT staff stress about missing backup windows or failing backups, or worse, not really knowing the status of all backup jobs without digging through multiple control panels and running multiple reports.

The cost of risk

Another hidden cost of running multiple backup systems is the increased chance for error, since operators must spend more time managing the larger collection of backup hardware. One example is encryption key management, which gets more complex as more systems are involved. Adding hardware is disruptive, so it typically gets put off as long as possible. That means systems run with less headroom, ‘working without a net’ so to speak: everything must go as planned or backup quality can suffer. And when it does, the IT team is again left scrambling to provide a fix.

The Scale-Out advantage

There’s a reason it’s called “backup system sprawl” and not “distributed backup”. If two systems really were better than one, SANs would never have taken off, and the storage industry would be touting ways for users to divide workloads up rather than combine them with technologies like virtualization. Data center managers add more backup hardware because the systems they’re using can’t scale to the capacity or performance they need, not because they would rather operate multiple targets.

Solutions like Sepaton’s S2100* family of backup storage systems enable enterprises to expand their backup capacity and stay ahead of data growth without affecting performance. They do this by letting the system scale performance independently of capacity: adding processing nodes increases throughput, while adding disk shelves increases storage. This way companies can leverage the benefits of a scale-out system while maximizing the storage capacity actually used, which in turn minimizes the capacity that needs to be purchased, installed, tuned and maintained. They can also maintain (or actually improve) deduplication effectiveness as data sets grow, since these systems can perform global deduplication across all data.
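The concept can be sketched in a few lines of Python. This is a generic model of scale-out backup storage, an illustration of the idea rather than Sepaton’s implementation; the per-node throughput and per-shelf capacity figures are assumptions, not S2100 specifications:

```python
# Generic scale-out model (illustrative, not any vendor's design):
# processing nodes add ingest throughput, disk shelves add capacity,
# and each dimension grows independently as the bottleneck dictates.
from dataclasses import dataclass

@dataclass
class ScaleOutTarget:
    nodes: int = 2                       # each node adds ingest bandwidth
    shelves: int = 4                     # each shelf adds raw capacity
    tb_per_hour_per_node: float = 3.0    # assumed figure
    tb_per_shelf: float = 50.0           # assumed figure

    @property
    def ingest_tb_per_hour(self) -> float:
        return self.nodes * self.tb_per_hour_per_node

    @property
    def capacity_tb(self) -> float:
        return self.shelves * self.tb_per_shelf

    def grow_for(self, data_tb: float, window_hours: float) -> None:
        """Add nodes or shelves only along the dimension that's short."""
        while self.ingest_tb_per_hour * window_hours < data_tb:
            self.nodes += 1              # performance-bound: add a node
        while self.capacity_tb < data_tb:
            self.shelves += 1            # capacity-bound: add a shelf

target = ScaleOutTarget()
target.grow_for(data_tb=120, window_hours=8)
print(target.nodes, "nodes,", target.shelves, "shelves")
# -> 5 nodes, 4 shelves: throughput grew to meet the window
#    without buying a single shelf of unneeded capacity.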

Simply scaling a single system also lets admins maintain a more reasonable amount of headroom, instead of the common practice of pushing smaller systems to the ‘redline’ before bringing in the next box and starting the process over again. When more storage needs to be added, systems like Sepaton’s bring that new capacity online automatically, configuring RAID levels, stripe sizes and cache settings to keep the system optimized.

Comprehensive dashboard reporting gives admins system-wide monitoring of day-to-day operations across the entire backup environment. They get visibility into detailed reports on backup completion, deduplication efficiency, capacity growth and replication status. As a result, admins can manage more terabytes, or petabytes, of data per person and have the peace of mind of knowing they’re protected. This improves employee efficiency, which means lower overhead, but also less frustration, better job satisfaction and decreased turnover. It also reduces the risk of human error by eliminating the need to divide backups across multiple machines and remember which backup target is assigned to which one.

Summary

Running multiple backup systems is not the preferred way to accommodate data growth. It’s a workaround for environments that have grown beyond the capacity of their existing backup infrastructures when those systems won’t scale. As is usually the case, this workaround has costs: more hardware and software, more management overhead and more risk. It’s really a numbers game: fewer systems means fewer components to break, wear out and replace, less cost and less disruption.

Backup system sprawl creates a TCO problem, one that must be fully understood before it can be addressed. These costs include buying, implementing and running more systems, plus the strain on an IT staff that has to make an increasingly inefficient infrastructure work. With this understanding, the value of a system like Sepaton’s is clear: the foundation for a solution that can scale to accommodate the kind of data growth enterprises are seeing without taking a toll on the IT staff running it.

* S2100 is a registered trademark of Sepaton, Inc.

Sepaton is a client of Storage Switzerland

Eric is an Analyst with Storage Switzerland and has over 25 years of experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.
