As we usher in 2014, industry prognosticators are publishing their predictions about the hot trends of the New Year. While backup technology may not sit toward the top of those lists, some interesting developments took place in the world of enterprise backup in 2013 that are worthy of review.
Virtualized Backup Image Recovery
One trend that emerged in 2013 was multiple backup software vendors introducing the capability to boot virtual machines (VMs) directly off of backup images. The concept is to give end users the option to perform “in-place” data recoveries from the backup storage image as an alternative to waiting for a full data recovery to take place between the backup system and the primary storage area.
Some backup vendors claim that their offerings can perform a background data recovery to primary storage while the VM is operating off the backup image. Then when the recovery to primary storage is finished, the VM merely has to be re-pointed to the primary copy to resume normal service.
This is an interesting feature that could have some value for non-clustered environments where the only remaining available copy of the data is on the backup system. The challenge is that disk-based backup repositories are typically composed of high density, low RPM disk drives. In other words, the performance of the backup storage platform will likely be orders of magnitude less than what is available on the production storage system.
The other caveat is whether the backup image is stored natively on disk or in a deduplicated format. Rehydrating an entire server backup image and all its associated application data takes time and will negatively impact recovery time objectives (RTO). So while having the ability to boot off a backup image may seem interesting, it may not be practical for many environments. After all, predictable performance is key to ensuring end user satisfaction.
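To make the rehydration penalty concrete, here is a rough back-of-envelope estimate of how long fully rehydrating a deduplicated server image could take. The image size and throughput figure below are hypothetical assumptions for illustration, not vendor benchmarks.

```python
# Back-of-envelope RTO estimate for recovering from a deduplicated
# backup image. The throughput figure is a hypothetical assumption,
# not a measured benchmark.

def rehydration_hours(image_tb, rehydrate_mb_per_sec):
    """Hours to fully rehydrate an image of image_tb terabytes."""
    image_mb = image_tb * 1024 * 1024  # TB -> MB (binary units)
    seconds = image_mb / rehydrate_mb_per_sec
    return seconds / 3600

# A 2 TB server image at an assumed 200 MB/s rehydration rate
# works out to roughly three hours before the VM sees native-speed
# data -- a meaningful addition to any RTO calculation.
hours = rehydration_hours(2, 200)
```

Even at optimistic throughput, rehydration adds hours to the recovery window, which is exactly why the landing-area and SSD approaches discussed here are attractive.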
One way that backup technology providers could enhance this capability, however, is to follow ExaGrid’s lead and integrate a small non-deduplicated hard disk storage area, as we discussed in our article “How Data Deduplication Impacts Recovery”.
Another interesting option would be to integrate flash SSD storage into the disk backup platform and reserve this area exclusively for in-place data recoveries. Naturally, a data transfer would have to occur from the high-density disk where the image resides to the SSD; however, as an internal bus transfer, it would be relatively quick to process. And in the case of purpose-built backup appliances, once the data is fully rehydrated and migrated to the SSD resource in the array, the virtualized application will have access to a performance resource potentially as good as what’s in the primary storage system.
Lastly, like the disk backup appliances that integrate native HDD landing areas, a flash enabled disk backup appliance would be able to recover the data back to the original host much more quickly, since all the dedupe heavy lifting would already be completed. It will be interesting to see if any of the backup vendors in this space will modify their backup storage systems in the coming year to include this capability.
Backup Appliances Moving Beyond Backup
Another development in the backup appliance space in 2013 was EMC’s announcement that their Data Domain platform can serve as a common repository for backup and archiving workloads simultaneously. As we covered in our column “What Is Protection Storage”, EMC has continued evolving the Data Domain system to serve as an application-agnostic data protection repository for multiple business use cases. While data deduplication remains an important tenet of backup system architectures, businesses increasingly need ways to leverage shared infrastructure, particularly as data grows.
By enhancing the Data Domain file system to dynamically distinguish the workload requirements of backup data streams (typically fewer, larger files) from those of archive data streams (millions of small files) and intelligently adjust to each, organizations can further consolidate backup infrastructure and simplify operational management by using the same Data Domain platform for both workloads. While this feature is currently unique to EMC, other players in the purpose-built backup appliance (PBBA) market space will likely announce similar capabilities on their platforms in the future.
Backup “Copy” Data Continues To Move Upstream
2013 proved to be a benchmark year for Copy Data solution provider Actifio. With an upsurge in user adoption of their Copy Data storage management platform, Actifio’s value proposition has clearly resonated with the IT end user community as well as with Cloud and Managed Service Providers (CSPs/MSPs).
Our recent blog on Actifio’s capabilities provides a summary of how Actifio continues to gain presence in the data center. In short, by continuously protecting key business applications with a non-disruptive, out-of-band connection, Actifio’s platform manages a single protection copy of business data and distributes it via live clones to feed test/development and business intelligence environments. In addition to reducing the physical infrastructure required to share out data copies throughout the enterprise, Actifio’s offering can help businesses speed up product development lifecycles and improve business agility.
The premise behind Copy Data certainly isn’t anything new to the IT technology landscape. In fact, some companies claim they have been providing the same capabilities for years but just didn’t market this capability quite as well as Actifio. Regardless, given the ongoing pressures on IT organizations to slash costs and enhance operational management, we expect to see Actifio continue to move upstream into larger IT enterprise environments in 2014.
Recovery Based Licensing Could Portend A New Trend
One of the most interesting announcements in 2013 was the one made by backup software provider Asigra. After conducting an extensive market research survey in 2012, Asigra learned that many enterprise customers were struggling under the weight of their backup software licensing agreements. Since the cost of backup is directly tied to how much data is in the environment, organizations are effectively penalized every time they go back to re-negotiate their licensing agreement with their backup vendor.
Asigra has taken quite a novel approach: rather than charge customers based on how much data they back up, why not base the fee on how much data they actually recover?
As we covered in our article “Rationalizing Backup Licensing Strategies”, traditional backup licensing models have lagged behind the rest of the industry, particularly with respect to how utility-based cloud computing models work. For example, cloud providers enable users to granularly select how much server compute or storage capacity they require. There is no “one-size-fits-all” model in the cloud.
Likewise, Asigra’s Recovery License Model (RLM) analyzes how much data is recovered within a given calendar quarter and then computes an average annual recovery percentage rate to arrive at a fee that is more in line with how much the customer actually used the product. In addition, the RLM re-calculates the percentage each successive quarter, much like any utility service would. So if the amount of recovery activity decreases, the licensing cost decreases.
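As a rough sketch of how such a recovery-based calculation might work, consider the following. The flat per-terabyte rate and the recovery figures are hypothetical illustrations, not Asigra’s published pricing.

```python
# Illustrative sketch of a recovery-based licensing calculation.
# The rate and recovery figures are hypothetical, not Asigra's
# actual published pricing model.

def recovery_fee(protected_tb, recovered_tb_by_quarter, rate_per_tb=100.0):
    """Annual fee derived from the average quarterly recovery percentage."""
    quarterly_pcts = [r / protected_tb for r in recovered_tb_by_quarter]
    avg_recovery_pct = sum(quarterly_pcts) / len(quarterly_pcts)
    # The fee scales with how much of the protected estate was
    # actually recovered, rather than with raw protected capacity.
    return protected_tb * avg_recovery_pct * rate_per_tb

# Example: 500 TB under protection, with modest recoveries each
# quarter -- a shop that recovers little pays little.
fee = recovery_fee(500, [5.0, 2.5, 10.0, 7.5])
```

The key contrast with capacity-based licensing is that the 500 TB figure only enters the fee as a denominator for the recovery percentage; a customer who rarely restores pays far less than one who restores constantly.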
When we published our article “Big Data Demands Big Changes To Legacy Backup Licensing”, one of the first reader comments we received was that this was just a “bait-and-switch”; in other words, people would get hammered by the recovery fees. In fact, according to Asigra, just the opposite occurs. They claim their customers will save, on average, up to 40% compared to standard capacity-based licensing schemes.
Given the flat and/or decreasing nature of IT budgets, we fully expect Asigra’s RLM model to take hold and bring pressure on other enterprise backup software providers to either offer similar models or slash their licensing costs to stay competitive.
Backup Data In The Cloud Needs a Back Door
Probably one of the most newsworthy events of 2013 was the bankruptcy announcement of cloud service provider Nirvanix. Aside perhaps from some industry insiders who saw the handwriting on the wall several months ahead of time, this came as a big surprise to many people. Nirvanix was not some fly-by-night operation. It was well funded and had strong backing from some major industry players like IBM and Dell.
Storage Swiss covered the lessons learned from the Nirvanix shutdown in an independent white paper as well as in a companion webinar. While this may have been unsettling for those businesses currently using public cloud services or strongly considering the cloud, we think that in the long run, this will probably turn out to be a healthy thing for the industry.
As my colleague George Crump said during the webinar, “This de-mystified the cloud and showed that it’s human.” In other words, cloud infrastructure is just as vulnerable to failures and outages as the infrastructure within the four walls of your own data center. This also holds true in the case of cloud based backup. While cloud based backup is a great way to leverage a third party’s data center for securely off-siting data, you can’t put all your eggs in one basket. To be prepared for any contingencies, you need an alternate way to access your data and a way to remove it from your provider’s cloud should you need to.
For example, an alternate method for accessing your backup data could be as simple as cutting a tape backup and keeping it onsite. This provides several benefits. First, if for some reason you can’t access data in the cloud, you can always fall back on the local tape copy. Secondly, if you need to perform a large restore, you won’t have to trickle all the data over WAN links. As we covered in our article “Can Tape Save Cloud Storage”, we believe there are many practical roles for the use of tape in the cloud.
Other options include storage-infrastructure-as-a-service (SIaaS) offerings. For example, companies like Nasuni combine an efficient layer of flash storage for local data access with automated backup to the cloud for an annual subscription fee. Backup data can be directed to one or multiple cloud providers for redundancy, and typically the second copy costs just a fraction of the primary backup copy, making this a financially viable way to protect your business data across multiple clouds.
Likewise, there are technologies ranging from cloud gateway appliances to software based virtual appliances that allow local applications to utilize storage in the cloud. Some of these offerings can mirror data across multiple clouds for data redundancy. In addition to providing organizations with data resiliency, these solutions can be used to migrate data from one provider to another. So if your provider goes out of business or you just negotiate better terms with a new provider, you can port your data across cloud environments.
In short, there are a number of ways to protect against data loss in the cloud. While evaluating CSP services, be sure to inquire about what methods they have in place to help you remove or migrate your data. By now they are probably used to getting this question. The key is to make sure you don’t paint yourself into a corner.
The backup solution marketplace is as competitive as it has ever been. This is offering businesses of all sizes more options for protecting their data at increasingly lower costs. This is essential given the fact that data growth is driving infrastructure and operational costs ever higher. Those backup manufacturers and service providers that continue to innovate and deliver more value to their customers, while lowering the total cost of backup ownership, will continue to take market share. We’ll keep a close eye on the market throughout 2014 and keep you plugged-in to all the noteworthy developments.