Don’t Forget Data Protection When Selecting Cloud Providers

The cloud is good at availability and data durability. If there is an outage, cloud providers have an excellent track record of getting their services back online quickly and providing access to the latest copy of data. What if, though, the organization needs access to a previous copy of data? Availability services are real-time, but previous versions are required to recover from cyber-attacks, rogue users, and data corruption caused by misbehaving applications. While most cloud providers offer snapshot capabilities, they typically lack the ability to automatically create stand-alone, point-in-time copies of data, which are needed to recover from human-caused disasters.


The data protection services available in a cloud provider's environment are a critical factor, among others, in selecting that provider. Unfortunately, data protection is often overlooked or simply assumed to be present. In this blog series, we walk IT planners through the cloud selection process while focusing on data protection as a key requirement.

What to Look for in Cloud Providers

The major cloud providers have a lot in common. They all provide infrastructure (compute, network, and storage) on a pay-as-you-go basis. This consumption model is popular with organizations looking to reduce data center CapEx and better match the way they deliver services to their customers.

Each cloud provider also has unique offerings, such as a specialization in machine learning or image recognition, but most organizations will be more interested in the core capabilities of these services. The operative word in selecting a cloud provider is flexibility: the service should adjust to the organization's needs as those needs evolve.

Flexible Machine Types

The needs of an organization change over time, so a key requirement is flexible machine types. As new workload requirements arise, the organization will want to create different types of virtual machines as needed. Google Cloud, for example, provides many different machine types, including standard, high-memory, high-CPU, shared-core, micro-bursting, memory-optimized, and custom virtual machines. Additionally, GPUs can be added to any of these machine types for extra processing power.

Short Lived Virtual Machines

Cloud-native applications are written to take advantage of all available and assigned CPU resources. If a large processing job comes in, the organization may choose to assign thousands of processors to the task to reach completion sooner. Most cloud providers require a minimum "buy-time" of at least an hour. That means if those thousands of processors complete the job in 10 minutes, the organization has to pay for 50 minutes of idle time. Google Cloud, by contrast, provides very granular, per-second billing.
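The arithmetic above can be sketched as a quick cost comparison. This is a minimal, illustrative calculation; the $0.05/hour VM rate and the job size are hypothetical and not any provider's actual pricing.

```python
import math

# Hypothetical comparison of hourly-minimum vs. per-second billing.
# The rate ($0.05/hour) and job parameters are illustrative only.

def job_cost(num_vms, job_minutes, rate_per_hour, min_billing_seconds):
    """Cost of a batch job when each VM bills in whole units of
    min_billing_seconds (3600 for an hourly minimum, 1 for per-second)."""
    billed = math.ceil(job_minutes * 60 / min_billing_seconds) * min_billing_seconds
    return num_vms * billed / 3600 * rate_per_hour

hourly = job_cost(1000, 10, 0.05, 3600)   # billed a full hour per VM
per_second = job_cost(1000, 10, 0.05, 1)  # billed only the 10 minutes used
print(hourly, per_second)
```

Under these assumed numbers, the hourly-minimum model bills six times the compute actually consumed for a 10-minute job, which is the idle-time penalty the paragraph describes.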

Point-in-Time Data Protection

Another key area, and one that this blog series will focus on, is data protection. Most cloud providers lack basic data protection: they can't create periodic stand-alone copies of data. In situations where the organization is dealing with cyber-attacks, rogue users, human errors, or data corruption from misbehaving applications, these point-in-time, stand-alone copies are required.

Most cloud providers replicate data between storage systems within the primary cloud data center and then replicate it again to a remote data center. The problem is that the replication happens almost instantly. Data corruption caused by a cyber-attack or application coding error is immediately replicated to all the other storage systems. There is no “air-gap” between copies.

Snapshots provide some protection, although they are typically not taken frequently enough. A bigger problem is that these copies are totally dependent on the primary copy being accessible. Another issue with most cloud snapshot technologies is that their use isn't integrated into the application. Whenever the application changes, application owners must manually take snapshots or develop scripts to execute them. For a couple of VMs this may be easy, but as the environment grows it becomes very hard to maintain. In addition, snapshots consume storage capacity, which in the cloud increases the size of the monthly bill. To manage costs, application owners need to be aware of how many snapshots they keep and must remember to delete old snapshots or move the copies to an alternate location and manage that bucket.
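The snapshot-cost concern above can be made concrete with a back-of-the-envelope estimate. This is a simplified sketch under stated assumptions: the per-GB rate, the daily change rate, and the incremental-snapshot model are all hypothetical, not any provider's real pricing or implementation.

```python
# Rough estimate of monthly snapshot storage cost as retained snapshots
# accumulate. All figures are hypothetical, for illustration only.

def snapshot_monthly_cost(base_gb, daily_change_gb, snapshots_kept, rate_per_gb_month):
    """Assumes incremental snapshots: the base image is stored once, and
    each retained daily snapshot adds roughly one day's changed data."""
    stored_gb = base_gb + snapshots_kept * daily_change_gb
    return stored_gb * rate_per_gb_month

# Keeping 30 daily snapshots of a 500 GB disk with ~5% daily change,
# at an assumed $0.026/GB-month:
print(snapshot_monthly_cost(500, 25, 30, 0.026))
```

Even in this simplified model, retained snapshots more than double the stored capacity versus the base disk alone, which is why untracked snapshot sprawl shows up directly on the monthly bill.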

Organizations moving some or most of their workloads to the cloud need a data protection solution that delivers capabilities similar to their on-premises backup, but without its management overhead, and that an application administrator can operate.


There are many different reasons to select a cloud provider. Organizations should focus on flexibility of consumption and, of course, price competitiveness. They shouldn't be wooed by the largest provider or the one that dominates the headlines. IT needs to be careful, though, not to overlook the importance of point-in-time data protection as part of the evaluation. In this area, there is a surprising amount of differentiation between the various cloud providers.

In our next blog, we’ll discuss the differences between the protection mechanisms that cloud providers use compared to stand-alone backup solutions. Watch the on-demand webinar “Backup as a Service” now.

Sponsored by HYCU

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

