Most data protection solutions use the public cloud as a digital dumping ground to lower the cost of on-premises data protection infrastructure. To keep costs down, vendors often store the backup data set in a low-cost object store like Amazon’s Simple Storage Service (S3), and they usually store that protected data in a proprietary format, which reduces its accessibility and reusability. To advance the state of cloud data protection, vendors need to focus on providing instant access, both for recovering workloads and for reusing the data in other use cases.
The State of Cloud Utilization for Data Protection
Many vendors only use the cloud to store an exact copy of the backup data set, which effectively makes the public cloud a tape replacement but does nothing to shrink the on-premises infrastructure. Others use public cloud storage as a tier, moving older backups off-premises and reducing on-premises backup storage infrastructure. A few are attempting to use public cloud compute in addition to cloud storage to create a disaster recovery as a service (DRaaS) offering, but are learning that DR in the cloud has almost as many issues as recovery in a customer-owned site.
The Backup Format Problem
The single biggest challenge is that most data protection vendors don’t store data in its native application format. To improve on-premises backup performance, they package data into large bundles before writing it to disk, and these proprietary formats persist as the vendor moves the data to the cloud. The problem with storing data in a non-native format in the cloud is that it must be extracted before cloud services can use it or a disaster recovery can begin, which increases the recovery time objective (RTO).
The Object Storage Problem
Object storage is a very cost-effective method of storing data. It has built-in scaling and durability capabilities that make it ideal for long-term data retention. Object storage is not, however, typically suitable as storage for production applications. If the vendor stores data on S3, its customers must copy or restore that data to another tier within the cloud infrastructure before actually using it. Moving data from S3 to Amazon’s Elastic Block Store (EBS), for example, can take over an hour per terabyte. Add the time to extract that data from a proprietary format, and the time to recover data to EBS multiplies. Storage Switzerland has spoken with Amazon customers who report taking over 24 hours to recover a 6TB database.
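To make the math concrete, here is a rough, illustrative calculation of that recovery time. The per-terabyte hydration rate comes from the figure cited above; the extraction multiplier is an assumption used only to show how a proprietary format stretches the RTO, not a measured vendor number.

```python
# Rough, illustrative RTO math for recovering data that sits in a
# proprietary format on object storage. Rates are assumptions, not
# measured AWS or vendor figures.

def estimated_recovery_hours(dataset_tb: float,
                             hydrate_hours_per_tb: float = 1.0,
                             extraction_factor: float = 3.0) -> float:
    """Estimate hours to recover a dataset from object storage.

    hydrate_hours_per_tb: time to copy data from S3 to EBS (~1 hr/TB, as cited)
    extraction_factor:    assumed multiplier for unpacking a proprietary
                          backup format before the data is usable
    """
    hydrate_hours = dataset_tb * hydrate_hours_per_tb
    return hydrate_hours * extraction_factor

# Under these assumptions a 6 TB database lands in the same ballpark as
# the 24+ hour recoveries customers have reported.
print(f"{estimated_recovery_hours(6):.0f} hours")  # -> 18 hours
```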
The Return Home Problem
In most cases, if the customer can successfully recover in the cloud, they will want to return operations home to the original data center. The problem is that while the organization was in a DR state, it was changing and creating data, and all of that changed and new data needs to transfer back to the primary data center. Even if the on-premises data center still has most of the data, most data protection applications need to restore the entire data set. The cloud compounds the problem because of its slow transfers and its egress fees.
Introducing Actifio 10c – Advanced Cloud Data Protection
Actifio follows a different model than the traditional data protection solution. First, it stores data in native application format, making it accessible almost instantly to nearly any process or service. Second, it lets organizations decide how much they want to invest in on-premises infrastructure: they can keep a complete copy, a working set, or no on-premises backup storage infrastructure at all. Actifio can instantly mount a VM’s datastore directly from on-premises or cloud object storage and then stream that data, from either location, back to production storage. The VM is immediately accessible while Actifio restores data in the background.
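A minimal sketch of the pattern that paragraph describes, an "instant mount with background restore," is shown below. It is illustrative only, not Actifio’s actual mechanism: reads are satisfied immediately, from production storage when the block has already landed and straight from the backup copy when it has not, while a background task streams the remaining blocks home.

```python
# Illustrative sketch of instant access with background restore,
# assuming a simple block-addressed interface. Not Actifio's code.
import threading

class InstantMount:
    def __init__(self, backup_blocks: dict[int, bytes]):
        self.backup = backup_blocks              # blocks still in backup storage
        self.production: dict[int, bytes] = {}   # blocks restored so far
        self.lock = threading.Lock()
        # Stream the remaining blocks back to production in the background.
        threading.Thread(target=self._hydrate, daemon=True).start()

    def read(self, block_id: int) -> bytes:
        # The workload is usable immediately: serve from production storage
        # if the block has landed, otherwise pull it from the backup copy.
        with self.lock:
            if block_id in self.production:
                return self.production[block_id]
        data = self.backup[block_id]
        with self.lock:
            self.production[block_id] = data
        return data

    def _hydrate(self) -> None:
        for block_id, data in self.backup.items():
            with self.lock:
                self.production.setdefault(block_id, data)

mount = InstantMount({0: b"boot", 1: b"data"})
print(mount.read(1))  # usable before hydration completes
```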
In its latest release, Actifio 10c, the company adds a reverse change block tracking capability so that it restores only the data needed for recovery. If any of the on-premises backup cache survives the disaster, that data is not re-transferred. The streaming capability eliminates the “return home” problem, and reverse change block tracking significantly lowers recovery time and cloud egress costs.
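The idea behind reverse change block tracking can be sketched in a few lines: compare what survived on-premises against the protected copy and bring back only the blocks that differ or are missing. The block IDs and hashing here are hypothetical, purely to illustrate why egress drops.

```python
# Illustrative sketch of the reverse change block tracking idea.
# Not Actifio's implementation.
import hashlib

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def blocks_to_transfer(protected: dict[int, bytes],
                       surviving: dict[int, bytes]) -> list[int]:
    """Return the IDs of blocks that must come back from the cloud."""
    needed = []
    for block_id, data in protected.items():
        local = surviving.get(block_id)
        if local is None or block_hash(local) != block_hash(data):
            needed.append(block_id)
    return needed

protected = {0: b"alpha", 1: b"beta", 2: b"gamma"}
surviving = {0: b"alpha", 1: b"CHANGED"}          # block 2 was lost
print(blocks_to_transfer(protected, surviving))   # -> [1, 2]
```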
Actifio 10c also supports multiple backup targets. Customers can back up to an on-premises object store or NAS and to the cloud at the same time. New in 10c is support for Dell EMC’s Data Domain storage systems via the DD Boost protocol. In 10c, customers can also copy data to multiple public clouds at the same time, for the ultimate in disaster preparedness or to seed those clouds for different use cases; again, because the data is in native format, cloud services can access it directly. Today Actifio supports Amazon AWS, Google Cloud Platform, and IBM Cloud with one-click DR orchestration. And since Actifio stores data in native format, it is available to cloud-native services such as Amazon Redshift or Google BigQuery for analysis and processing.
Actifio 10c also solves the problem of moving data from a cloud object store to a cloud block-based storage infrastructure. It does this by placing an SSD cache between the object store and the block-based store. With this cache, Actifio gives cloud compute instant, high-performance access to data without having to wait for the entire data set to migrate to the block-based store. This feature makes using the cloud for at-scale testing and analysis much more cost-effective.
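Conceptually, this is a read-through cache in front of the object store, sketched below under simple assumptions. The fetch function and tier names are hypothetical; the point is that only the blocks a workload actually touches are pulled out of object storage, and repeat reads come from the fast tier.

```python
# Illustrative read-through cache between an object store and a block
# consumer. Not Actifio's actual code.
from collections import OrderedDict

class ReadThroughCache:
    def __init__(self, fetch_from_object_store, capacity: int = 1024):
        self.fetch = fetch_from_object_store   # slow path: object storage
        self.capacity = capacity
        self.cache: OrderedDict[int, bytes] = OrderedDict()  # fast SSD-like tier

    def read(self, block_id: int) -> bytes:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # keep hot blocks resident
            return self.cache[block_id]
        data = self.fetch(block_id)             # miss: pull from the object store
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:     # evict the coldest block
            self.cache.popitem(last=False)
        return data

cache = ReadThroughCache(lambda b: f"object-store-block-{b}".encode())
print(cache.read(42))   # first read hits the object store; repeats hit the cache
```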
A significant new capability in 10c is DR Orchestration, which enables Actifio customers to create and automatically execute disaster recovery plans. They can preset the network configuration, the order of recovery, and the execution of pre- and post-recovery scripts. The result is simple, one-click recoveries back on-premises or in the cloud. DR Orchestration rewards IT for investing the time in planning for disaster, and it makes plans easier to update and test. The Actifio orchestration tools will also automatically instantiate additional SKY appliances to make sure that a massive recovery effort executes quickly.
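To show what such a plan can boil down to, here is a minimal sketch of a declarative recovery plan: an ordered list of tiers, each with network settings and pre/post scripts, executed in sequence by a single call. The plan structure, field names, and commands are hypothetical and are not Actifio’s DR Orchestration format.

```python
# Illustrative sketch of a declarative DR plan and a one-click executor.
# Field names and commands are hypothetical.
import subprocess

dr_plan = [
    {"name": "database-tier", "network": "recovery-vlan-10",
     "pre": ["echo provision recovery network"],
     "post": ["echo verify database is accepting connections"]},
    {"name": "app-tier", "network": "recovery-vlan-20",
     "pre": [], "post": ["echo smoke-test application"]},
]

def recover_workload(name: str, network: str) -> None:
    # Placeholder for mounting the workload's protected copy and booting it.
    print(f"recovering {name} on {network}")

def execute_plan(plan: list[dict]) -> None:
    for step in plan:                        # recovery order is preserved
        for cmd in step["pre"]:
            subprocess.run(cmd, shell=True, check=True)
        recover_workload(step["name"], step["network"])
        for cmd in step["post"]:
            subprocess.run(cmd, shell=True, check=True)

execute_plan(dr_plan)   # the "one click"
```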
Organizations can also use DR Orchestration for cloud migrations. The capability can continuously seed a sandbox during testing and then perform the final cutover when ready. It can also inject these workloads into containers instead of VMs to further help organizations modernize their data center operations.
Once the migration is complete, the customer can continue to use Actifio to protect the cloud-native version of the workload. All the same capabilities apply, including the ability to copy data to another cloud. Customers can use an agentless approach and leverage cloud snapshots, or they can use Actifio’s native solution, which creates more consistent copies of data and enables more rapid recoveries.
StorageSwiss Take
Actifio 10c is a significant upgrade. Its features enable organizations to recover rapidly, anywhere. It also helps customers curtail costs by adding a performance layer to object storage so that it can serve many more use cases. The release’s reverse change block tracking allows companies to improve on-premises recovery times while reducing egress fees. Its DR Orchestration enables IT professionals to keep up with the pace of change within the data center. Disaster recovery planning is becoming a lost art, but DR Orchestration allows for its rediscovery. Lastly, Actifio 10c helps with digital transformation: customers can leverage it not only to migrate workloads to the cloud but also to protect them once they are there.