Is Your Data Protection Architecture Holding You Back?

In most organizations, the data protection architecture has gone unchanged for almost a decade. Certain components within the architecture do change because of technological advances or forced refreshes, but the basic design remains the same. The challenge with an unchanging data protection architecture is that the data center it was designed to protect, along with user expectations, is changing more rapidly than ever.

Unfortunately, many organizations treat data protection as an afterthought. In most cases an application or operating environment goes through its test phase, and potentially even early production, without its data protection requirements being considered. The assumption is that the current backup architecture will cover it. It may or may not; it may not be easily expandable, or it may not support the new application being deployed. Because data protection is treated like a second-class citizen, the budget for expansion is not calculated into the cost of rolling out the application. The result is an aging data protection infrastructure that is duct-taped together in an attempt to protect the application. In the end, the data is at risk, as is the organization’s reputation.

Why Data Protection Architectures Fall Short

The first challenge facing data protection architectures is that both the software and the hardware are scale-up in nature, yet the rest of the data center is rapidly moving toward a scale-out design. The problem is most obvious with scale-up backup hardware. Production data sets are growing rapidly, and backup storage typically requires 10X or more the capacity of production storage. That means for every terabyte added to production storage, the organization has to factor in ten terabytes of backup storage.

The 10X factor itself is also increasing, because IT is required, whether by compliance regulations or internal requests, to retain data for longer periods. Organizations are also looking to use their backup storage for more use cases, like archive and light production.
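To see what that multiplier means in practice, here is a minimal back-of-the-envelope sketch. The growth rate, protection multiplier and planning horizon are illustrative assumptions, not figures from any particular vendor:

```python
# Back-of-the-envelope backup capacity estimate.
# All multipliers below are illustrative assumptions, not vendor figures.

def backup_capacity_tb(production_tb: float,
                       protection_multiplier: float = 10.0,
                       annual_growth: float = 0.30,
                       years: int = 3) -> float:
    """Backup capacity needed once production data has grown for `years`."""
    projected_production = production_tb * (1 + annual_growth) ** years
    return projected_production * protection_multiplier

if __name__ == "__main__":
    for prod_tb in (50, 100, 250):
        need = backup_capacity_tb(prod_tb)
        print(f"{prod_tb} TB production today -> ~{need:,.0f} TB backup in 3 years")
```

Even modest growth and retention assumptions push backup capacity into a different class than the production storage it protects, which is why scale-up designs run out of headroom.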

Backup hardware needs to be truly scale-out in nature, so that when capacity or performance limits are reached, a node can be added and backup jobs automatically start using the newly available space.

The good news is that many backup hardware solutions have built-in capabilities like deduplication, compression and replication. The problem is that most backup software applications have limited ability to work with those features. In fact, in many cases the software duplicates the same features, which makes either the software or the hardware much less efficient.

The duplication of features also makes monitoring backup completion difficult. For example, the backup software may consider the data protected after its backup jobs are complete, but the organization may not (and should not) consider the data protected until it is replicated to a secondary site.

Backup software and hardware need to work together to make sure that meeting service level agreements (SLAs) is the focus of protection, not completing a series of jobs.
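As a rough sketch of what SLA-focused monitoring might look like (the field names and the 24-hour recovery point objective are assumptions for illustration, not any product’s actual API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ProtectionStatus:
    # Hypothetical status record; field names are illustrative only.
    backup_finished: Optional[datetime]    # when the local backup job completed
    replica_finished: Optional[datetime]   # when the copy at the secondary site completed

def meets_sla(status: ProtectionStatus,
              now: datetime,
              rpo: timedelta = timedelta(hours=24)) -> bool:
    """Treat data as protected only when BOTH the local backup and its
    off-site replica completed within the recovery point objective."""
    for finished in (status.backup_finished, status.replica_finished):
        if finished is None or now - finished > rpo:
            return False
    return True
```

The point is that the check spans the whole protection chain, not just the backup job the software reports as “successful.”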

Another challenge facing today’s data protection architectures is rapid recovery. Many modern software solutions offer some form of rapid recovery that enables the backup storage to serve data directly to a physical server or virtual machine. The problem is that the underlying storage often delivers such poor performance that the so-called “instant” recovery is useless. Rapid recovery features mean backup storage becomes production storage, at least temporarily. Data protection hardware needs rearchitecting so it can provide close to production-level performance while still remaining in the protection-storage price band.

Rapid recovery isn’t the only challenge; disaster recovery is more critical than ever. Disasters are no longer limited to the occasional natural event; every organization is at risk of a cyber-attack or the malicious actions of a disgruntled employee. Modern solutions need the ability not only to recover rapidly from an on-premises failure but also to execute a disaster recovery at a second site or in the cloud.

A final challenge with legacy protection architectures is a lack of cloud functionality, both in supporting the cloud as a secondary storage target and in protecting cloud-based applications like Office 365 and G Suite. If the legacy architecture supports the cloud at all, it often uses it as a tape replacement, meaning that 100% of the data stays on-site and another 100% of the data sits in the cloud. All this technique does is double storage costs.

An increasing number of organizations are using the cloud for routine functions like office productivity, email, file sharing and other Software as a Service (SaaS) applications such as Office 365. These applications need the same protection as if they were on-premises, yet most services don’t provide business-class protection. The problem is that SaaS adoption came long after the data protection infrastructure was implemented, so the ability to protect that data is limited.

The data protection infrastructure needs to leverage cloud storage and protect SaaS-based applications. Cloud storage should be used intelligently, so that older backups can be staged to the appropriate cloud tier and so the organization can leverage cloud compute to recover in the event of a disaster. The software component of the architecture should also protect SaaS-based applications and enable recovery both on-premises and in the cloud.
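A minimal sketch of the kind of age-based tiering policy described above; the tier names and age thresholds are assumptions for illustration, not any vendor’s defaults:

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy: recent backups stay local for fast restores,
# older ones are staged to progressively colder cloud tiers.
TIER_POLICY = [
    (timedelta(days=30),  "on-premises"),
    (timedelta(days=365), "cloud-standard"),
    (timedelta.max,       "cloud-archive"),
]

def target_tier(backup_created: datetime, now: datetime) -> str:
    """Return the storage tier a backup of this age should live on."""
    age = now - backup_created
    for max_age, tier in TIER_POLICY:
        if age <= max_age:
            return tier
    return TIER_POLICY[-1][1]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(target_tier(now - timedelta(days=3), now))    # -> on-premises
    print(target_tier(now - timedelta(days=400), now))  # -> cloud-archive
```

The idea is to stage data by age rather than mirror everything, which avoids the cost-doubling pattern described earlier.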

The challenges facing legacy architectures have led to a new generation of data protection solutions. Vendors are delivering solutions that integrate backup software and hardware. In some cases those designs are scale-out and offer some form of cloud support. However, before IT jumps into these solutions, it needs to understand the strengths and weaknesses of each, which is the subject of our next blog.

Sponsored by StorageCraft

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
