Current Data Protection Infrastructure Is Broken

Some of the more sizeable shifts in the data center include the move from mainframes to open systems and network computing, the move to client-server computing, and the move to a virtualized server infrastructure. Each of these shifts spawned several new data protection companies, each focused on protecting a new component of the data center that incumbents could not. These newcomers, though, could not protect more than their specific use case, which forced IT to adopt new vendors while keeping the old ones to protect traditional environments. After two decades of this pattern, the data center needs a new way to converge data protection as much as possible.

Because data protection is a core data center function, IT cannot wait for legacy vendors to support a new environment. Instead, it has to move quickly and select the most viable solution to meet its protection service level agreements (SLAs). Today, it’s not uncommon for an organization to run five or more data protection solutions in a single data center. Each solution protects a particular use case, and, in many instances, each has its own data protection hardware. The result is not only rapidly rising cost and complexity but also the risk of exposure that comes with this siloed approach.

The High Cost of Ignoring the Cloud

Many of today’s data protection solutions ignore the potential of the cloud. Those that do support the cloud are often on-premises backup solutions that use the cloud only for a redundant copy of data. Cloud functionality can and should be much more. First, a data protection solution should leverage the cloud for more than just storing DR copies of protected data; it should also use the cloud to store old backups. Most organizations need only the most recent copies of data on-premises, and storing older backups in the cloud reduces the cost and physical footprint of on-premises secondary storage.
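As an illustrative sketch of this kind of tiering (not taken from any specific product mentioned here), an AWS S3 lifecycle rule can automatically move backup objects older than 30 days into colder, cheaper storage and expire them after a retention period. The bucket prefix, day counts, and storage class below are assumptions chosen for the example:

```json
{
  "Rules": [
    {
      "ID": "tier-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 2555 }
    }
  ]
}
```

A policy like this keeps only the most recent backups in standard storage, which matches the pattern described above: recent copies stay readily accessible while older backups shift to low-cost cloud tiers.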

Data protection solutions also need to use the cloud for full disaster recovery, not just for storing a DR copy. One of the cloud’s most powerful attributes is its available compute resources. Those compute resources should support disaster recovery efforts as well as reporting and analytics against the cloud-stored copies of data.

Many organizations today also have applications that run only in the cloud. Cloud-native applications are just as susceptible to corruption or cyber-attack as on-premises applications. Their data needs to be protected and stored in another cloud region, with an alternate cloud provider, or even in the on-premises data center.
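One minimal way to keep a cloud-native data set protected outside its home region, sketched here as an assumption rather than a recommendation from the article, is an AWS S3 replication configuration that copies objects to a bucket in a second region. The IAM role ARN, account number, and destination bucket name are placeholders:

```json
{
  "Role": "arn:aws:iam::123456789012:role/replication-role",
  "Rules": [
    {
      "ID": "replicate-cloud-native-data",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": { "Prefix": "" },
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::dr-copy-us-west-2"
      }
    }
  ]
}
```

Replicating to an alternate provider or back to an on-premises data center would require a different mechanism, but the principle is the same: the protected copy must live outside the failure domain of the original.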

Data Protection Infrastructure Needs to Change

One of the challenges with legacy data management vendors, and even some newer platforms, is that their solutions were developed to meet the data protection requirements of the era in which they came to market. To keep pace, these vendors either bolted solutions onto their core product or partnered with vendors that could provide the functionality they lacked. As a result, today’s data protection infrastructure is brittle and complex. It can’t keep up with today’s pace of change, let alone the rapid pace of the future data center.

Data protection once again needs to consolidate, but it needs to consolidate more intelligently. The modern data center requires more than a consolidated data protection software solution to thrive. It needs a converged data protection operating environment that controls both hardware and software and that can fully exploit the cloud. The result is a single environment that enables initiation, management, monitoring, and control of all data protection tasks.

The key deliverable of the converged data protection operating environment must be reducing data management costs while providing IT with greater flexibility in responding to the growing demands of the organization.

Sponsored by Rubrik

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
