Organizations are asked to store unprecedented amounts of data. Protecting this data to a secondary storage device and replicating it off-site is critical. For most organizations, the capacity requirements of secondary data (data used for backups and other purposes) are roughly ten times those of production stores. The enterprise backup application is the ideal candidate to manage this secondary data.
Most organizations leverage either a scale-up or a scale-out storage architecture to try to keep up with this secondary data requirement; the problem is that both designs tend to fall well short of what the organization needs. In this ChalkTalk video, we discuss the problems with both the scale-up and scale-out architectures and what IT can do to address them.
The backup software needs to do both: scale up to fully utilize backup hardware and scale out to keep up with the growth in primary storage. Unfortunately, most backup applications are scale-up only; they count on the backup storage hardware to scale out. There are two flaws in this design. First, it creates two separate points of management with two distinctly different architectures.
Second, the design assumes that a single server can provide all the compute the infrastructure needs. Given new responsibilities like instant recovery, archiving, and copy data management, the backup software requires more CPU power than a single server can provide.
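The compute argument can be illustrated with a toy scheduling sketch (all node names, throughput figures, and job sizes below are hypothetical and not any product's API): with a fixed per-server processing rate, the backup window shrinks only by adding compute nodes, which is exactly what a scale-up-only application cannot do.

```python
# Hypothetical illustration of scale-out backup compute: distributing
# backup jobs across worker nodes shortens the overall backup window.
from dataclasses import dataclass


@dataclass
class WorkerNode:
    """One backup compute server with a fixed throughput (GB/hour)."""
    name: str
    throughput_gb_per_hour: float
    assigned_gb: float = 0.0

    def hours_needed(self) -> float:
        return self.assigned_gb / self.throughput_gb_per_hour


def schedule_backup(jobs_gb: list[float], nodes: list[WorkerNode]) -> float:
    """Greedy least-loaded assignment of jobs to nodes.

    Returns the backup window in hours, i.e. the finish time of the
    busiest node.
    """
    for job in sorted(jobs_gb, reverse=True):
        target = min(
            nodes,
            key=lambda n: (n.assigned_gb + job) / n.throughput_gb_per_hour,
        )
        target.assigned_gb += job
    return max(n.hours_needed() for n in nodes)


# 20 TB of secondary data split into 500 GB jobs.
jobs = [500.0] * 40

# One node at 1 TB/hour: the whole 20 TB takes 20 hours.
window_single = schedule_backup(jobs, [WorkerNode("node-1", 1000.0)])

# Four identical nodes: the same data set finishes in 5 hours.
four_nodes = [WorkerNode(f"node-{i}", 1000.0) for i in range(1, 5)]
window_scaled = schedule_backup(jobs, four_nodes)
```

The point of the sketch is simply that the backup window is bounded by aggregate node throughput; once a single server's CPUs are saturated, only adding nodes (scaling out) shortens it.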
The lack of flexible scaling options is just one of the ways that traditional backup architectures are breaking. To learn the other reasons, watch our latest on-demand webinar “10 Reasons Why Backup is Broken and How to Fix it”.