Covering Both Ends of DR

One of the lessons IT is learning from Hurricanes Harvey and Irma is that there are situations where the organization really does not need to meet, or even come close to meeting, the strict recovery point objectives (RPOs) and recovery time objectives (RTOs) it establishes. Frankly, even if the businesses impacted by Harvey had met those RPOs/RTOs, the reality is that their employees were, rightfully, taking care of their families and communities.

Understanding the Disaster Reach

Thankfully, not every disaster is of the same scale as a major hurricane, and people are far less patient with minor inconveniences than they are with major disasters. When the reach of a disaster is contained to just one organization, or worse, one application or storage system, IT must respond more quickly to that outage than it would to a major disaster whose reach extends to hundreds if not thousands of businesses. Each disaster type, major and minor, requires a different type of protection and a different type of recovery.

The challenge for IT is how to solve two data protection problems that are seemingly at odds with each other. Recovery from minor disasters requires frequent data protection and rapid recovery. Recovery from major disasters requires that data be isolated and kept far from the disaster's reach. Recovery from a major disaster is also typically two-phased: first there is recovery at the secondary site, then later there is a recovery back to the original primary data center.

The traditional solution for rapid recovery is replication, where data is copied, typically at the sub-file level, to secondary storage on-premises, to a secondary data center, to a cloud provider, or in some cases all three. Most replication solutions have some form of journaling capability, so if data corruption creeps in, the secondary copy can be rolled back to a clean version of the data. But these journals are not intended to keep all data forever. Ideally, replication is the copy of data that fulfills requests like, "I need a copy of that data as it looked 5 minutes ago."
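To make the journaling idea concrete, here is a minimal sketch, not any particular vendor's implementation, of a secondary copy that keeps a timestamped journal of replicated writes. All class and method names are hypothetical; the retention window and the replay logic illustrate why a journal can answer "as it looked 5 minutes ago" but not "as it looked 3 months ago."

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float   # when the write occurred on the primary
    key: str           # which block / sub-file object changed
    value: str         # the replicated change itself

class JournaledReplica:
    """Hypothetical secondary copy with a bounded point-in-time journal."""

    def __init__(self, retention_seconds: float = 3600.0):
        self.journal: list[JournalEntry] = []
        # Journals are not intended to keep all data forever:
        # entries older than this window are discarded.
        self.retention = retention_seconds

    def replicate_write(self, key: str, value: str, timestamp: float) -> None:
        self.journal.append(JournalEntry(timestamp, key, value))
        self._prune(timestamp)

    def _prune(self, now: float) -> None:
        cutoff = now - self.retention
        self.journal = [e for e in self.journal if e.timestamp >= cutoff]

    def state_at(self, point_in_time: float) -> dict:
        # Roll back by replaying the journal only up to the requested
        # point, skipping any later (possibly corrupted) writes.
        state: dict[str, str] = {}
        for entry in self.journal:
            if entry.timestamp <= point_in_time:
                state[entry.key] = entry.value
        return state
```

If corruption lands on the primary at time 200, recovering the state as of time 150 returns the last clean version, which is exactly the rollback behavior the journal exists to provide.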

The traditional solution for longer-term retention of data is backup. Ideally, the backup tool makes a copy of all data to secondary storage, and a copy of that data is replicated or transported off-site. It provides the recovery point of last resort for the mission-critical data that is being replicated, as well as for the data that did not need those rapid RPO/RTO levels. It is the copy that fulfills requests like, "I need a copy of that database as it looked 3 months ago."
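A common way backup products implement this kind of long-term retention is a tiered (grandfather-father-son) schedule. The sketch below is an assumption for illustration, not the article's prescription: dailies kept for two weeks, weekly fulls for twelve weeks, and monthly fulls for seven years, which is what lets a three-month-old copy of a database still exist when replication journals have long since pruned it.

```python
from datetime import date

def should_retain(backup_date: date, today: date) -> bool:
    """Hypothetical grandfather-father-son retention check."""
    age_days = (today - backup_date).days
    if age_days <= 14:                                    # son: recent dailies
        return True
    if backup_date.weekday() == 6 and age_days <= 12 * 7: # father: Sunday weeklies
        return True
    if backup_date.day == 1 and age_days <= 7 * 365:      # grandfather: monthlies
        return True
    return False
```

Under this policy, an ordinary mid-week daily from three months ago has expired, but the monthly full taken on the first of that month is still available to satisfy the "3 months ago" request.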

Backup also plays a role in protection from ransomware. Assuming the backup application has taken adequate measures to protect itself from the malware, the backup is the copy to go to if ransomware encryption has infected the entire environment, including replicated copies of data.

Hybrid Data Protection

The data protection sweet spot, where each application and data set is covered by just the right level of protection, varies from organization to organization. Traditional data protection solutions need to evolve into a more hybrid approach, where a single vendor can provide both replication and backup to a variety of backup targets, including the cloud. Doing so enables IT to control the whole process through a single interface with a single point of support.

At the same time, the solutions should be separate enough that they can be used independently, since most data centers aren't looking to replace their high availability and backup solutions at the same time. Typically, they have a pressing need in one area and will address the other in the future. The separation of the solution allows the vendor to start small within the data center, earn IT's trust, and then expand as the need arises.

To learn more about using hybrid data protection to hit your organization's sweet spot, watch our on-demand webinar, which covers:

  1. Establishing a data protection plan that meets your organization's RPO and RTO requirements while also meeting data retention requirements.
  2. Identifying which applications need the tightest RPO/RTO and how to craft a solution to meet them.
  3. Identifying which data needs to be retained and how to meet those retention requirements.
  4. Protecting the organization from natural disasters and ransomware with a single process.

Watch On Demand

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

