The Impact of Unpredictable Data Protection

Most production IT projects are planned months, if not years, in advance of deployment. Advance planning of infrastructure upgrades for the data protection architecture, however, seldom occurs. The problem is that updates to the backup infrastructure are often expensive, and because they are not budgeted, the organization is forced to scramble to allocate funds to the project. Reallocating funds means that other projects are delayed or not rolled out at their original scale. The lack of planning also forces the organization to pay extra, because it does not spend the time needed to find more cost-effective solutions.

The lack of predictable data protection spending cycles means that sometimes funds simply can't be allocated, and IT is forced to cobble together a workaround to provide some means of protection. Finding funds is especially problematic when IT budgets are flat or declining. While creative, workarounds increase risk: they may not protect data frequently enough or recover it quickly enough.

Stretching the data protection process to cover a new application or environment also has a ripple effect: it may leave existing environments more exposed to data loss or with slower recovery times than originally promised.

The lack of budget planning also forces organizations to continue using legacy data protection platforms that lack advanced capabilities such as high backup performance, rapid recovery, cloud support, and long-term data retention. These legacy platforms often require dedicated administration, which increases the total cost of ownership.

Even if the organization decides to switch data protection platforms, it may not be able to, since costs typically increase temporarily as the new solution is brought in and the old one is phased out. For a time, the organization must run both solutions, and during that time, costs are effectively doubled.

In addition to meeting increasing requirements like more frequent backups, more rapid recovery, and longer data retention, the unprecedented growth of production data forces organizations to add secondary storage capacity. Secondary storage is typically bought dozens (if not hundreds) of terabytes at a time, which means a significant upfront cost each time the current secondary storage system runs out of capacity.

While data protection planning and budgeting are essential, the reality is that most organizations won't or can't plan for "the spend." To be fair, they do the best they can, planning and budgeting for production spending. A new method of acquiring IT solutions is required, but it has to be one that doesn't force IT to move everything to the cloud.

Consumption-based IT is the answer. It enables IT to keep its resources on-premises while the organization pays only for the resources actually in use. It eliminates the high costs and low utilization associated with upgrades. Finally, it provides active capacity management that regularly tracks IT resource utilization and adjusts IT spending accordingly.
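The difference between lumpy, hard-to-budget upgrade spending and pay-for-use spending can be sketched with a toy model. All figures below (tranche size, prices, growth rate) are hypothetical, chosen only to show the shape of the two spend curves, not taken from any vendor's pricing:

```python
# Toy model contrasting two ways of paying for secondary storage.
# Every number here is an invented, illustrative assumption.

def upfront_spend(years, used_tb, growth_tb_per_year,
                  tranche_tb=100, tranche_price=50_000):
    """Buy capacity in fixed tranches whenever usage exceeds what is owned.
    Returns the spend incurred in each year."""
    owned_tb, yearly = 0, []
    for _ in range(years):
        spend = 0
        while used_tb > owned_tb:   # out of capacity: forced purchase
            owned_tb += tranche_tb
            spend += tranche_price
        yearly.append(spend)
        used_tb += growth_tb_per_year
    return yearly

def consumption_spend(years, used_tb, growth_tb_per_year,
                      price_per_tb_year=450):
    """Pay each year only for the capacity actually in use."""
    yearly = []
    for _ in range(years):
        yearly.append(used_tb * price_per_tb_year)
        used_tb += growth_tb_per_year
    return yearly

if __name__ == "__main__":
    # Start at 60 TB in use, growing 40 TB per year, over five years.
    print(upfront_spend(5, 60, 40))      # spiky: [50000, 0, 50000, 0, 50000]
    print(consumption_spend(5, 60, 40))  # smooth: [27000, 45000, ...]
```

The totals depend entirely on the assumed prices; the point of the sketch is the shape of the curves. The upfront model produces unpredictable spikes whenever capacity runs out, while the consumption model tracks actual usage year over year, which is what makes it budgetable.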

In our next blog, "Why Data Protection as a Service is Unpredictable," we discuss how the cloud helps with the predictability problem but isn't the total answer. In the meantime, learn more about consumption-based IT and how it enables a more predictable data protection infrastructure. Watch our on-demand webinar, "Consumption-Based Data Management Providing Peace of Mind."


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
