Storage managers face a paradox: they must store seemingly unlimited pools of data for business analytics and compliance, while also recovering Tier One applications nearly instantaneously and with zero data loss. Against this backdrop, a tiered backup storage strategy that integrates both high-performance and high-capacity storage systems becomes a linchpin of forward-looking data protection.
Business-critical applications carry strict recovery time objectives (RTOs, the time it takes to restore an application after a disaster) that typically necessitate replication and failover to secondary infrastructure (on- or off-premises) that performs at least nearly as fast as the primary production data center. However, these workloads account for only about 10% or less of the data restored in the event of a disaster. The vast majority of workloads carry less stringent RTOs and do not require production-grade performance during recovery. As a result, they may be stored on lower-cost backup infrastructure to avoid breaking the bank.
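The savings from that split can be sketched with a back-of-the-envelope model. The per-GB prices and the 500 TB data volume below are illustrative assumptions, not vendor quotes; the point is only the relative gap between a single-tier and a tiered design when roughly 10% of data needs the performance tier.

```python
# Back-of-the-envelope cost model for a tiered backup strategy.
# All prices and volumes are illustrative assumptions.

def monthly_cost(total_gb, hot_fraction, hot_price_gb, cold_price_gb):
    """Monthly storage cost when hot_fraction of the data sits on the
    performance tier and the remainder on the capacity tier."""
    hot_gb = total_gb * hot_fraction
    cold_gb = total_gb - hot_gb
    return hot_gb * hot_price_gb + cold_gb * cold_price_gb

total_gb = 500_000  # 500 TB of backup data (assumed)

# Everything on performance storage vs. ~10% hot / 90% cold.
single_tier = monthly_cost(total_gb, 1.0, 0.10, 0.02)
tiered = monthly_cost(total_gb, 0.10, 0.10, 0.02)

print(f"single tier: ${single_tier:,.0f}/month")
print(f"tiered:      ${tiered:,.0f}/month")
```

Even with these rough numbers, the tiered layout cuts the monthly bill by well over half, which is the economic argument behind the rest of this piece.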
Integrating object storage as a backup tier for “colder,” capacity-oriented (as opposed to performance-oriented) data can help enterprises substantially cut the costs of their backup infrastructure. Object storage architectures store data as discrete units called “objects” that are kept in a single flat repository – eliminating the hierarchies of file and block alternatives. Objects are managed via extensive metadata (data about data, such as the date of the most recent access) that may be customized, as opposed to the fixed attributes inherent in file and block storage.
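The flat-namespace-plus-metadata idea can be sketched in a few lines. The class and field names below are illustrative, not any vendor's API: keys map directly to data and custom metadata, with no directory tree in between, and queries run against the metadata rather than a path.

```python
# Minimal sketch of a flat object store: keys map to (data, metadata)
# pairs with no directory hierarchy. Names are illustrative only.
from datetime import date

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> (bytes, metadata dict)

    def put(self, key, data, **metadata):
        self._objects[key] = (data, metadata)

    def get(self, key):
        return self._objects[key][0]

    def find(self, **criteria):
        """Return keys whose custom metadata matches every criterion --
        the kind of lookup a fixed directory path cannot express."""
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("backup-2024-01", b"...", tier="cold", last_access=date(2024, 1, 31))
store.put("backup-2024-02", b"...", tier="cold", last_access=date(2024, 2, 29))
store.put("db-snapshot-01", b"...", tier="hot", last_access=date(2024, 3, 1))

print(store.find(tier="cold"))
```

Because the "tier" tag travels with each object, identifying everything eligible for the capacity tier is a metadata query rather than a crawl through folder trees.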
Object storage performs more slowly than file and block architectures, but its use of a flat, geographically dispersed storage pool rather than directories and subdirectories makes it far easier to scale out. Meanwhile, richer metadata tagging enables data to be easily identified and analyzed (including analysis of usage patterns), regardless of where the data or the user is physically located.
These characteristics make object storage a common underpinning for public cloud services such as Amazon S3, and solutions are also available for on-premises deployment – giving storage buyers a range of options. Flexible deployment options and the ability to integrate with third-party cloud storage services are both important characteristics in an object storage provider, because they allow buyers to meet workload-specific needs in areas such as cost and performance.
When considering an object storage solution for purchase, storage planners should also weigh the ability to start small (thus avoiding over-provisioning) and to non-disruptively add nodes to clusters as capacity needs grow. Replication and erasure coding should be evaluated to ensure recoverability of data, as should multi-tenancy and encryption to support data privacy and security.
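To see why erasure coding protects data with less overhead than full replication, consider a toy single-parity scheme: k data chunks plus one XOR parity chunk (RAID-5 style). Production object stores use stronger codes such as Reed-Solomon with multiple parity chunks; this sketch only illustrates the recovery idea.

```python
# Toy erasure-coding sketch: k data chunks + 1 XOR parity chunk.
# Any single lost chunk can be rebuilt from the survivors.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Compute one parity chunk over equal-length data chunks."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return parity

def recover(surviving, parity):
    """Rebuild the single missing data chunk from survivors + parity."""
    missing = parity
    for c in surviving:
        missing = xor_bytes(missing, c)
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]  # three equal-size chunks
parity = encode(data)

# Lose chunk 1; rebuild it from the remaining chunks and the parity.
rebuilt = recover([data[0], data[2]], parity)
print(rebuilt)
```

Here three data chunks are protected with one extra chunk (33% overhead), whereas tolerating the same single failure with replication would require a full second copy (100% overhead) – the trade-off planners are weighing when they evaluate the two mechanisms.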
Register now to join subject matter experts from Storage Switzerland, Cloudian and Veeam for the on-demand webinar, “Three Steps to Modernizing Backup Storage,” which covers more tips for creating a tiered backup architecture that facilitates rapid recovery where needed while integrating lower-cost retention storage to keep the budget in check.