Change is good…or is it? The sheer volume and velocity of changes taking place in IT environments today are staggering. While change may be good for the business, improperly managed changes within the backup environment, if left unchecked, could deal a deadly blow when disaster strikes. Organizations may benefit from a real-time monitoring and reporting tool that exposes vulnerabilities within the backup infrastructure and enables proactive administration of the environment before problems occur.
Server virtualization technology has ushered in a whole new paradigm for data center change. While the ability to rapidly provision, re-assign or de-commission servers according to business needs greatly enhances business agility, it introduces a host of new management challenges that can ultimately compromise the safety of business data.
Often regarded as the last line of defense for critical business data, legacy enterprise backup systems are limited in the depth of reporting they can provide in dynamic, ever-changing environments. Furthermore, most backup applications don’t provide a holistic view of all the end-to-end components that comprise the entire backup system – backup software, disk/tape hardware, networking equipment and file servers.
To further compound this problem, many environments have more than one backup application in use. Indeed, with the proliferation of virtual server hypervisors like VMware, many server administrators have now assumed control of backing up their own virtual machine (VM) images. Additionally, backup administrators may utilize one application to protect data across the enterprise, while database administrators use native database backup tools.
In short, it is not uncommon to have multiple applications, siloed backup targets and application owners all protecting the same underlying data.
Backup Blind Spots
Ironically, despite multiple overlapping backup processes, it is virtually impossible to determine at a glance whether data is in fact being backed up, or whether backups adhere to recovery time and recovery point objectives (RTO/RPO). Ultimately, regardless of who controls the backup process, the final responsibility for ensuring that backup systems adhere to internal service level agreements (SLAs) rests with backup administrators.
As a result of these uncertainties, a new level of automation is required to mitigate the risks of a highly dynamic environment. Data center operators and backup administrators need a tool that understands all of the complex relationships within a backup environment and its virtual infrastructure. In short, a tool that can do what humans were never meant to do – the laborious and rigorous work of mapping out all the interdependencies of a highly fragmented backup environment – in real time.
When backup or restore jobs fail, troubleshooting is typically a highly manual process of correlating data across multiple, disparate hardware and software elements. Backup operators may have to switch between the backup application management console, a disk backup appliance or tape library management interface, NAS file systems and even the network switch to conduct root cause analysis. Now imagine doing all of this work while under pressure to restore a critical production application.
In addition to automating root cause analysis to quickly isolate offending backup components and speed up problem resolution, backup administrators need tools which can proactively identify conditions like network choke points, failed disk/tape resources or an orphaned VM, BEFORE a backup job is scheduled to occur. This capability alone could dramatically enhance backup service level agreements and restore confidence in IT’s ability to consistently protect and recover core business data.
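The kind of pre-flight validation described above can be illustrated with a minimal sketch. Everything here is hypothetical: the check names, thresholds and stubbed readings stand in for data that a real monitoring tool would pull from the library's management API and switch port counters.

```python
# Hypothetical sketch: proactive pre-backup health checks, run ahead of the
# scheduled backup window. All resource names, thresholds and readings are
# illustrative stand-ins for live telemetry.

def check_environment(checks):
    """Run each named check; return a list of human-readable failures."""
    failures = []
    for name, check in checks.items():
        ok, detail = check()
        if not ok:
            failures.append(f"{name}: {detail}")
    return failures

def tape_library_online():
    # Would come from the tape library's management interface.
    drives_online, drives_total = 3, 4
    return drives_online == drives_total, f"{drives_online}/{drives_total} drives online"

def network_path_clear():
    # Would come from switch port utilization counters.
    utilization_pct = 35
    return utilization_pct < 80, f"backup VLAN at {utilization_pct}% utilization"

failures = check_environment({
    "tape library": tape_library_online,
    "network": network_path_clear,
})
for f in failures:
    print("WARNING:", f)  # flagged before the backup job ever starts
```

With the stubbed readings above, the degraded tape library is flagged while the network check passes, giving the administrator time to act before the backup window opens.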
Adrift On a Sea of VMs
Server virtualization presents its own set of risks to the protection of critical business data. VM images can be rapidly provisioned or relocated to help businesses speed time to market; however, in the chaotic rush to meet business demands, standard data protection policies and procedures sometimes get overlooked. With businesses supporting hundreds, if not thousands, of unique VM images, it is not unusual for new VM images to go completely undetected by the backup administrator.
A backup monitoring tool would need to integrate well with VMware to identify and flag any unprotected VM images, as well as track the movement of host images between VMware ESX systems. In addition to ensuring that critical data does not go unprotected, the ability to alert when overlapping backups threaten to compromise VM application performance is critically important in environments where multiple backup applications are in use.
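At its core, detecting unprotected and double-protected VMs is a set comparison between the hypervisor's inventory and each backup application's catalog. The sketch below hard-codes both sides with invented VM names; a real tool would pull the inventory from the vSphere API and the catalogs from each backup application.

```python
# Minimal sketch of unprotected-VM and overlapping-backup detection.
# All VM names and catalog contents are invented for illustration; in
# practice they would come from the hypervisor and backup applications.
from collections import Counter

vm_inventory = {"web01", "web02", "db01", "db02", "app01"}  # hypervisor's view

catalogs = {
    "enterprise_backup": {"web01", "db01", "app01"},  # backup admin's app
    "vm_image_backup": {"web01", "web02"},            # server admin's app
}

# VMs not present in any backup catalog: data at risk.
protected = set().union(*catalogs.values())
unprotected = sorted(vm_inventory - protected)

# VMs present in more than one catalog: candidates for overlapping jobs
# that could compromise application performance.
counts = Counter(vm for vms in catalogs.values() for vm in vms)
overlapping = sorted(vm for vm, n in counts.items() if n > 1)

print("Unprotected VMs:", unprotected)
print("Backed up by multiple applications:", overlapping)
```

In this invented inventory, one VM has slipped through with no backup at all, while another is being backed up twice by different applications – exactly the two blind spots the article describes.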
Furthermore, a single consolidated view of the health of all backup applications running within the enterprise – whether located in a single facility or geographically dispersed across multiple data center sites – is a key requirement for properly supporting critical business data. For backup administrators and Chief Risk Officers tasked with ensuring the protection of business data, this could be an indispensable tool.
In order to meet strict regulatory requirements, many financial companies must conclusively demonstrate that they are protecting critical business and customer information across a wide geography. The ability to produce empirical reports which show auditors that data is actively protected and meets recovery point and recovery time objectives is lacking from most enterprise backup applications. In addition to reporting on backup processes, such a feature would also need to track the success of data replication processes to ensure that disaster recovery objectives are being met.
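The RPO half of such a compliance report reduces to a simple rule: flag any system whose last successful backup is older than its recovery point objective. A minimal sketch, with invented system names, timestamps and RPO values:

```python
# Hypothetical RPO compliance check. System names, backup timestamps and
# RPO values are invented; a real report would read them from the backup
# application's catalog.
from datetime import datetime, timedelta

now = datetime(2012, 6, 1, 12, 0)

systems = {
    # name: (last successful backup, recovery point objective)
    "crm_db":     (datetime(2012, 6, 1, 6, 0),  timedelta(hours=4)),
    "file_share": (datetime(2012, 6, 1, 11, 0), timedelta(hours=24)),
}

violations = {
    name: now - last
    for name, (last, rpo) in systems.items()
    if now - last > rpo
}

for name, age in sorted(violations.items()):
    print(f"{name}: last good backup {age} ago exceeds its RPO")
```

Here the database's six-hour-old backup exceeds its four-hour RPO and would appear on the auditor's exception report, while the file share remains compliant. The same comparison, applied to replication timestamps, covers the disaster recovery side.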
Perhaps the bane of every storage planner’s existence is the annual capacity planning exercise to determine how much money to ask for in the upcoming fiscal year. Forecasting storage and backup growth is an imperfect science, to say the least. Lacking empirical data, most planners can only extrapolate from prior years’ spending, combining that growth percentage with any additional requirements needed to support upcoming projects.
IT planners need the ability to produce accurate budgetary forecasts based on data extracted from within their own storage environment. A software tool which can collate data from multiple back-end storage resources and maintain a historical trending report to support IT financial planning is essential to avoid overspending and inefficiency. In this manner, IT planners can effectively demonstrate that current operations are both streamlined and efficient and that budgetary requests are based on the “facts on the ground.”
The highly dynamic nature of IT production environments is driving demand for real-time monitoring of data protection infrastructures. Despite all the investments made in backup and disaster recovery infrastructure, one minor undetected change in production can put important data at risk in even the most sophisticated backup and recovery environments. Monitoring tools which can proactively flag data protection problems in real time, before threats emerge, are becoming a must in today’s ever-changing IT landscape. An enterprise data protection management solution, much like that offered in EMC’s Data Protection Advisor, can provide the proactive global monitoring and reporting capabilities required to support the complex, highly distributed backup infrastructures in use today.
EMC is a client of Storage Switzerland