Analyst Blog: The Data Protection Network Gap

Storage Switzerland has been tracking an interesting trend, or perhaps a lack of one, in data protection over the last 10 months: the decay in the quality of the data protection network compared to the continual improvement of the primary storage network. Primary storage and its accompanying network are making quantum leaps forward in performance, while very little has changed in backup networks. As a result, there is an ever-widening performance gap between these two networks that threatens an organization’s ability to protect data and to recover from a server or storage system outage.

There are two basic solutions to this problem. The first, and somewhat obvious, is to upgrade the data protection network with every successive upgrade to the primary storage network. This can be expensive, however, especially considering how frequently primary storage systems and their networks receive performance upgrades.

The second solution is to get smarter about the way we back up and recover data, so that data can be protected and recovered at a faster pace. This option includes more intelligent data transfer, for both backup and recovery, so that less data needs to be transferred from application servers to the backup devices.

The concept of more intelligent data transfer is becoming common in virtual environments. For example, VMware delivers a robust changed block tracking (CBT) capability that transfers only the blocks that have changed since the last backup. Microsoft’s Hyper-V lacks a native equivalent, but the operating system is open enough that software vendors seem to have little trouble bringing this functionality to the Hyper-V platform.
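To make this concrete, here is a minimal sketch of how a backup tool might query CBT through VMware’s vSphere API using pyVmomi. The host name, credentials, VM name, and disk key are illustrative placeholders, and the sketch assumes CBT is already enabled on the VM, a snapshot exists for the backup job, and a changeId was saved from the previous run:

```python
# A minimal sketch, assuming pyVmomi, a vCenter at "vcenter.example.com",
# a VM named "app-server-01" with CBT enabled, an existing snapshot, and a
# changeId persisted from the previous backup. The names and the disk key
# (2000) are illustrative, not prescriptive.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="backup", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

snap = vm.snapshot.currentSnapshot  # snapshot taken for this backup job
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk) and d.key == 2000)
last_change_id = "..."  # changeId saved when the previous backup completed

offset = 0
while offset < disk.capacityInBytes:
    info = vm.QueryChangedDiskAreas(snapshot=snap, deviceKey=disk.key,
                                    startOffset=offset,
                                    changeId=last_change_id)
    for extent in info.changedArea:
        # Only these byte ranges need to cross the backup network.
        print(f"changed: {extent.length} bytes at offset {extent.start}")
    offset = info.startOffset + info.length

Disconnect(si)
```

The point is that only the extents returned by QueryChangedDiskAreas ever travel over the backup network; the unchanged majority of the virtual disk stays put.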

CBT has one major shortcoming: only a finite number of CBT backups can be performed before some sort of consolidation job is needed to refresh the original baseline full backup. Without these consolidation efforts, backup performance typically suffers. The problem with consolidation jobs is that they are very time consuming. During a consolidation job, a large amount of network I/O can take place between the backup device and the backup server, and consolidation consumes a measurable amount of backup server CPU as it decides which data is needed to create the new consolidated image.
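To see why, consider a toy sketch of what a consolidation job actually does. The block-map representation below is purely illustrative, but it shows where the I/O and CPU go: every baseline block is read, every incremental in the chain is replayed, and every block of the new synthetic full is written back out:

```python
# Toy sketch of consolidation: build a new synthetic full by replaying each
# incremental's changed blocks over the baseline, newest block winning.
# The dict-of-blocks layout is illustrative only.

def consolidate(baseline: dict, incrementals: list) -> dict:
    """baseline and incrementals map block_number -> block bytes."""
    full = dict(baseline)          # every baseline block is read (network I/O)
    for inc in incrementals:       # one pass per incremental in the chain
        for block_no, data in inc.items():
            full[block_no] = data  # CPU: pick which version of each block wins
    return full                    # every block is written back out

baseline = {0: b"A", 1: b"B", 2: b"C"}
incrementals = [{1: b"B1"}, {2: b"C2"}, {1: b"B2"}]
print(consolidate(baseline, incrementals))  # {0: b'A', 1: b'B2', 2: b'C2'}
```

Even in this toy form, the cost scales with the size of the full image plus every incremental in the chain, not with the amount of data that actually changed.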

As a result, many data centers find it is faster to simply run a complete, net-new full backup across the network. Of course, if your data protection network has not kept pace with your primary storage network, we are back to square one. As we discussed in our recent webinar, “The 5 Ways Your Backup Design Can Impact Virtualized Data Protection”, now available on-demand, some data protection appliances can execute the entire consolidation job on the backup appliance itself, saving both network I/O and backup server CPU cycles.

From a protection standpoint, you may be better off keeping multiple standalone copies of the data you are protecting, especially for business-critical applications like SQL Server and Oracle. The problem is that the entire database would have to be transferred across an aging backup network and, of course, you would also have to pay for the extra backup device capacity to store these full images. This is where something like EMC’s DD Boost software can be leveraged. Boost acts as something of a data traffic cop, identifying changes to application data, such as an Oracle database, on the client side so that only the unique changes have to traverse the backup network to the Data Domain appliance. And, of course, deduplication at the appliance eliminates the capacity concerns of storing multiple copies of the same database image.
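The actual DD Boost protocol is proprietary, but the underlying idea can be sketched generically: the client fingerprints chunks of the data stream and ships only the chunks the appliance has not already stored. The fixed-size chunking and the appliance_index set below are stand-ins for the real wire protocol:

```python
# Generic source-side deduplication sketch, in the spirit of DD Boost.
# 'appliance_index' stands in for a round-trip that asks the appliance
# which fingerprints it already stores; real systems use variable-size
# chunking rather than the fixed 64 KiB chunks used here.
import hashlib

CHUNK = 64 * 1024

def backup(stream: bytes, appliance_index: set) -> int:
    """Return the number of bytes actually sent across the backup network."""
    sent = 0
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()  # fingerprint at the client
        if fp not in appliance_index:           # only unique chunks travel
            appliance_index.add(fp)
            sent += len(chunk)
    return sent

index = set()
data = b"".join(bytes([i]) * CHUNK for i in range(16))  # 16 distinct chunks
first = backup(data, index)    # initial full: every chunk is new
second = backup(data, index)   # repeat of the same data: nothing travels
print(first, second)           # 1048576 0
```

On the second pass nothing crosses the network, which is exactly the behavior that makes repeated full database backups tolerable on an aging 1GbE data protection network.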

While originally intended for backup applications like NetWorker, vRanger, NetBackup and Backup Exec, DD Boost now offers specific support for the native backup utilities within applications like Oracle and MS-SQL. The results are impressive. Recently, on EMC’s Protection Continuum Blog, Alyson Langon discussed how Data Domain’s DD Boost for SQL demonstrated a significant performance improvement over regular SQL backups. While the performance gains over a 10GbE network were impressive, the gains over a 1GbE network were even more so, and we think the 1GbE test is the more relevant one given the state of many data protection networks.

The next step is to facilitate rapid data recovery. CBT and source-side deduplication can help with data protection, but they really don’t do much for the recovery process. In an upcoming entry, I will cover the steps you can take to recover faster on an aging data protection network.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
