Scaling Out Backup

Secondary storage, the tier that holds backup data among other things, can hold as much as 10X the capacity of primary storage. In addition, organizations expect more from their backup solutions: faster recoveries, data management, and copy data management. The growth of secondary data, combined with these increased expectations, is pushing backup architectures to the brink. Backup, both hardware and software, needs to borrow from the distributed architectures of the cloud to meet these demands.

Data Protection is Under Attack

Data protection, and really every secondary data process, is under attack. It is no longer enough to simply back data up to the cheapest device possible. Application owners and users expect IT to meet tighter RPOs and RTOs, which means more frequent backups and the ability to present recovered volumes directly from backup storage. Users also expect IT to leverage protected copies to feed test/dev and DevOps environments, populate the DR site with the right data, and supply copies of data to analytics engines. They also expect the data protection process to feed, or even be, the archive process, which means it now has to meet regulatory and legal hold standards.
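As a back-of-the-envelope illustration (the function name here is my own, not from any product), the link between a tighter RPO and backup frequency can be sketched:

```python
import math
from datetime import timedelta

def min_backups_per_day(rpo: timedelta) -> int:
    """An RPO caps how much recent data loss is tolerable, so the
    interval between backups can be no longer than the RPO itself.
    Returns the minimum number of backup runs per day."""
    return math.ceil(timedelta(days=1) / rpo)

# A traditional nightly backup satisfies only a 24-hour RPO.
print(min_backups_per_day(timedelta(hours=24)))    # → 1
# A 15-minute RPO implies at least 96 runs per day.
print(min_backups_per_day(timedelta(minutes=15)))  # → 96
```

Going from one backup window per night to dozens of runs per day is exactly the kind of load that strains a single backup server feeding a single deduplication appliance.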

The traditional data protection architecture, a backup server connected to a deduplicating backup storage device, can no longer meet these demands. Because backup software has been slow to evolve, organizations are seeking out new solutions for faster recoveries, copy data management, and data archiving. The problem with implementing these solutions is that they are unaware of the existing capabilities of the backup software and become yet another silo for IT to manage.

Distributed Backup

Distributed computing revolutionized the cloud. Clusters of commodity servers scale compute and storage with each node added to the cluster. Scale-out architectures exist in the backup market, but they typically scale out only to meet the environment's capacity demands. The backup and data management software does not scale to take advantage of the additional compute these systems provide. As a result, in most data centers with scale-out secondary storage, the compute resource goes largely underutilized.
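To make "scaling compute along with capacity" concrete, here is a minimal sketch of hash-partitioning backup chunks across cluster nodes. The node names and the simple modulo placement are my own assumptions for illustration; production systems typically use consistent hashing so that adding a node reshuffles only a fraction of the data. The point is that each node that owns a chunk also does that chunk's work (deduplication, indexing), so compute scales with the cluster rather than sitting idle:

```python
import hashlib

def owner_node(chunk_id: str, nodes: list[str]) -> str:
    """Hash-partition a backup chunk to one node. Both the chunk's
    capacity and its processing (dedup, indexing) land on that node,
    so adding nodes adds usable compute, not just disk."""
    h = int(hashlib.sha256(chunk_id.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]  # hypothetical cluster members
placement = {n: 0 for n in nodes}
for i in range(1000):
    placement[owner_node(f"chunk-{i}", nodes)] += 1
print(placement)  # roughly even split of 1000 chunks across the nodes
```

Because the hash spreads chunks evenly, every node added to the cluster takes on a proportional share of both the stored data and the data management workload.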

Hyperscaled Data Protection

Instead, organizations need to look for data protection solutions where the software takes advantage of the compute power as the scale-out architecture grows, more in line with a distributed computing model. Some vendors may go so far as to apply the attributes of hyperconvergence, creating a hyperscaled solution in which the backup software forms a cluster and manages the distribution of both the backup software and the backup capacity.

To learn more about hyperscaling backup, watch our ChalkTalk Video “Hyperscaling Data Protection”.

George Crump is the Chief Marketing Officer of StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March of 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.
