Druva is a market leader in end-point data protection, with more than 3,000 customers in 70 countries. The company sees cloud backup approaching a ‘tipping point’ that could make it a de facto standard in the backup industry, especially if the challenges of cost, performance and data security can be resolved. Druva recently announced a new product, Phoenix, designed to address these concerns and move the company beyond end-point protection to complete protection of the data center.
Druva’s Phoenix is a unified backup and archival solution that offers unlimited cloud-based data retention. It uses agents loaded onto each client server to back up protected data sets and maintain an ‘infinite’ set of snapshots, sending only the delta changes to a backup repository in the cloud. Druva also offers an optional on-site storage device, the “CloudCache Appliance”, to store snapshots locally and help manage data transfer with the cloud.
When Phoenix performs a backup, the snapshot metadata is first sent to cloud storage, where it’s combined with the metadata from any other remote office/data center locations to create a metadata set representing a universal backup repository. This information enables Druva to globally deduplicate all the data within an enterprise, across locations, for maximum deduplication efficiency. The pertinent subsets of this metadata are sent back to the servers being backed up, or to the CloudCache Appliance, to support their deduplication efforts. Only then are the unique backed-up data blocks themselves transferred to the cloud.
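In essence, the metadata-first flow amounts to maintaining one enterprise-wide index of block fingerprints and shipping only blocks that no site has stored before. A minimal sketch in Python, assuming a simple SHA-256 content-hash index (Druva’s actual chunking and metadata formats aren’t public, so the class and method names here are illustrative):

```python
import hashlib

class GlobalDedupIndex:
    """Toy global deduplication index: one fingerprint set shared across all
    sites. Illustrative only -- not Druva's actual metadata format."""

    def __init__(self):
        self.known_blocks = set()  # fingerprints of blocks already in the cloud

    def fingerprint(self, block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def plan_upload(self, blocks):
        """Return only the blocks not yet stored anywhere in the enterprise."""
        to_send = []
        for block in blocks:
            fp = self.fingerprint(block)
            if fp not in self.known_blocks:
                self.known_blocks.add(fp)
                to_send.append(block)
        return to_send

index = GlobalDedupIndex()
site_a = [b"config", b"payroll", b"logs"]
site_b = [b"config", b"logs", b"new-report"]  # overlaps with site A

sent_a = index.plan_upload(site_a)  # all three blocks are new
sent_b = index.plan_upload(site_b)  # only b"new-report" crosses the wire
```

Because the index spans every location, a block backed up at one remote office never has to be uploaded again from any other office, which is what distinguishes global deduplication from the per-appliance variety.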
3-Tier Cloud Backup
Instead of building its own cloud infrastructure, Druva currently uses Amazon S3, though the system may eventually support other public clouds as well. The rationale is that public clouds have global footprints and can more easily geo-disperse data blocks across multiple “availability zones”, providing users with a higher level of data protection and resiliency than a proprietary cloud backup service can.
Public cloud storage also costs less than the cloud offerings of individual backup companies, an important factor when the cloud is used to support a long-term archive. Druva also leverages three tiers of storage to help keep costs down: tier one is the on-site CloudCache Appliance, tier two is Amazon S3, and tier three is Amazon Glacier.
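The cost advantage of tiering is simple arithmetic: the bulk of a long-term archive sits in the cheapest tier, with only recent snapshots in warmer storage. A back-of-the-envelope sketch, using hypothetical per-GB rates (not actual AWS pricing):

```python
# Hypothetical monthly storage rates, $/GB-month -- not actual AWS pricing
S3_PER_GB = 0.03       # warm tier: recent snapshots, fast restores
GLACIER_PER_GB = 0.01  # cold tier: aged-out archive data

def monthly_storage_cost(recent_gb, archive_gb):
    """Recent snapshots stay in S3; older ones age out to Glacier."""
    return recent_gb * S3_PER_GB + archive_gb * GLACIER_PER_GB

# A 10 TB archive, of which 1 TB is recent enough to remain in S3:
all_s3 = monthly_storage_cost(10_000, 0)     # everything warm: ~$300/month
tiered = monthly_storage_cost(1_000, 9_000)  # tiered:          ~$120/month
```

With these illustrative rates, aging nine-tenths of the archive into the cold tier cuts the monthly bill by more than half, which is why a Glacier-backed third tier matters for long retention periods.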
Other Cloud Backup Options
Most legacy backup applications have added a cloud service to their offerings, but many of these ‘direct-to-cloud’ solutions can have major latency issues, since backup speed depends on available bandwidth. Other cloud backup solutions use a single on-site appliance to store local backups and then replicate that data to the cloud, using the appliance to keep backups local and provide faster restores. But according to Druva, these ‘disk-to-disk-to-cloud’ (D2D2C) solutions suffer from the private-cloud cost problem described above, and they typically can’t perform global deduplication.
As a differential, snapshot-based backup solution, Phoenix doesn’t store redundant data, as traditional backup systems do when they periodically take full backups or create ‘synthetic fulls’. This makes the cost of keeping archived data simply a function of maintaining existing stored data, a process that Amazon’s low-cost Glacier was designed for. Phoenix also leverages true global deduplication to minimize the size of the actual cloud data set.
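This ‘incremental forever’ approach can be modeled as a chain of delta snapshots: each backup stores only the blocks that changed, yet any restore point can be reassembled by replaying the chain. A toy sketch (not Druva’s implementation; file deletions are ignored for brevity):

```python
class SnapshotChain:
    """Toy 'incremental forever' model: each snapshot records only changed
    blocks, but any full restore point can be assembled from the chain.
    A sketch only -- deletions and chunking are omitted."""

    def __init__(self):
        self.snapshots = []  # each entry is a {path: block} dict of changes

    def backup(self, current_state, previous_state):
        """Store only the blocks that differ from the previous backup."""
        delta = {path: block for path, block in current_state.items()
                 if previous_state.get(path) != block}
        self.snapshots.append(delta)

    def restore(self, snapshot_index):
        """Replay deltas up to the chosen point in time."""
        state = {}
        for delta in self.snapshots[:snapshot_index + 1]:
            state.update(delta)
        return state

chain = SnapshotChain()
day0 = {"a.txt": b"v1", "b.txt": b"v1"}
day1 = {"a.txt": b"v1", "b.txt": b"v2"}  # only b.txt changed overnight
chain.backup(day0, {})
chain.backup(day1, day0)
# The second snapshot holds one block, yet restore(1) rebuilds all of day1.
```

Since no snapshot ever repeats an unchanged block, total storage grows only with the unique data written, not with the backup schedule, which is what makes the retained set cheap to park in an archive tier.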
The Phoenix service is priced on the amount of source data backed up and retained (before deduplication), at roughly $1 per GB stored per month. In addition to file backup, Druva offers an agent for MS SQL Server, with a VMware agent scheduled for release in late 2014.
Cost, Performance and Security
Like the D2D2C backup solutions, Phoenix stores the most recent copy locally, on the CloudCache Appliance, addressing the latency issues of legacy direct-to-cloud backup solutions. But Phoenix’s use of successive snapshots and its metadata-first process reduces the volume of data the system handles and sends to the cloud, further improving backup performance. Druva also leverages the public cloud to deliver a lower TCO than D2D2C solutions, based on global deduplication and a two-tiered cost structure with Amazon S3 and Glacier. Finally, by using Amazon’s cloud infrastructure, encryption and geographically dispersed availability zones, Phoenix can address users’ concerns about data security.
Cloud backup was originally, for the most part, an SMB and consumer solution, as mid-size and larger companies with bigger data sets struggled with the cloud’s inherent latency. Hybrid backup solutions addressed much of the latency problem but still relied on proprietary clouds with a higher cost structure than the public clouds.
Druva’s Phoenix seems to address this cost issue by using Amazon’s S3 and Glacier, and its global deduplication helps there as well. Its snapshot-based backups and metadata management process look to further improve backup performance over most D2D2C solutions and make the product cost-effective as an archive solution, too. Only time and market adoption will tell, but Druva’s Phoenix may be a logical point of entry for businesses to start consuming cloud backup.