Developing a Practical Cloud Strategy for Traditional Data Centers

Many organizations want to develop a cloud strategy, but most don’t know where to begin. As a result, the organization stumbles into the cloud, often starting by using the cloud as a secondary backup storage area, with the hope that cloud utilization will evolve from there. The reality is that, other than the adoption of a few Software as a Service (SaaS) applications, the organization’s use of the cloud never really goes much beyond the backup use case. Without a strategy, an understanding of the cloud’s potential for the organization, and an understanding of how to move to the cloud, the chances of full adoption are very slim.

Part I – Understanding What the Cloud Can Do

The cloud has two critical resources: cloud storage and cloud compute. Ironically, the most valuable of those two resources – cloud compute – is the least adopted by traditional data centers. Indeed, the cloud can be a very cost-effective storage location, ideal for the popular initial use case of backup, and potentially more valuable for a far less popular use case – archive. Both use cases count on the cloud to store non-critical copies of data for a long duration. They often use one of the least expensive storage areas in the cloud, an object-based storage system, which is designed to be cost-effective and not necessarily high-performance.
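
To make the storage use case concrete, here is a minimal sketch of the backup and archive pattern against an object store, written in Python with the boto3 SDK against Amazon S3 purely as an illustration; the bucket, key, and file names are hypothetical, and the archive copy simply targets a colder, cheaper storage class.

```python
# Minimal sketch: copying backup data to low-cost object storage.
# Bucket, key, and file names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Recent backups land in standard object storage...
s3.upload_file("db_backup_2024_01_15.bak", "example-backup-bucket",
               "backups/db_backup_2024_01_15.bak")

# ...while long-term archive copies can target a colder, cheaper tier.
s3.upload_file("db_backup_2023_q4.bak", "example-backup-bucket",
               "archive/db_backup_2023_q4.bak",
               ExtraArgs={"StorageClass": "DEEP_ARCHIVE"})
```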

It is crucial for IT planners to move out of the “using the cloud as a digital dumping ground” mentality and look to leverage all aspects of it, especially compute.

Disaster Recovery as a Service

The first step in a successful cloud strategy is leveraging the cloud as a disaster recovery site. In this use case, the organization replicates or backs up application data to the cloud and then, in the event of a disaster, uses cloud compute to instantiate virtual instances of its on-premises applications. The value of the cloud here is that it can eliminate the costs associated with maintaining a dedicated disaster recovery site. It also makes testing significantly easier, since the cloud is always ready and accessible.

Using the cloud for disaster recovery takes advantage of the cloud pricing model. For disaster recovery, only the latest copy of data needs to be stored in the cloud, and the compute required to launch those virtual machines only needs to be rented when a disaster is declared or a test is run.
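
As a rough sketch of how that pricing model plays out, the example below (in Python with boto3; the image IDs and instance types are placeholders) provisions cloud compute only when a disaster or test is declared, while the replicated data simply sits in storage the rest of the time.

```python
# Hypothetical sketch: compute is provisioned only when a disaster (or test) is declared.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def declare_disaster(recovery_images):
    """Boot cloud instances from images kept current by replication or backup."""
    instance_ids = []
    for image_id, instance_type in recovery_images:
        resp = ec2.run_instances(ImageId=image_id, InstanceType=instance_type,
                                 MinCount=1, MaxCount=1)
        instance_ids.append(resp["Instances"][0]["InstanceId"])
    return instance_ids

# Until this is called, the organization pays only for the replicated data at rest.
# declare_disaster([("ami-0exampledb", "r5.xlarge"), ("ami-0exampleapp", "m5.large")])
```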

There are some concerns with using the cloud for disaster recovery. The first is selecting the most appropriate type of cloud provider. The second is deciding how the organization will move data to the cloud (replication or backup). Another challenge is determining how the organization will transform its virtual machines to run in the provider’s cloud, since most providers do not use the same hypervisor that data centers do.

There are also networking concerns. The first obvious networking challenge is the initial connection, which most solutions will overcome. But there is a second challenge: How will the networking perform during a disaster? If there is a data center failure, it will be necessary to reroute users seamlessly to the cloud instance of the application. These users may be coming from other offices, or they may be users who usually connect to the failed data center and are now working from home or a local coffee shop.
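
One common way to handle that rerouting is a DNS update that points the application’s name at its cloud instance once it is running. The sketch below uses boto3 and Route 53 purely as an illustration; the hosted zone ID, record name, and address are placeholders, and other environments would use their own DNS or load-balancing mechanism.

```python
# Hypothetical sketch: repoint an application's DNS record at its cloud instance after failover.
import boto3

route53 = boto3.client("route53")

def fail_over_dns(zone_id, record_name, cloud_ip):
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "DR failover: send users to the cloud instance",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": 60,  # short TTL so clients pick up the change quickly
                    "ResourceRecords": [{"Value": cloud_ip}],
                },
            }],
        },
    )

# fail_over_dns("ZEXAMPLE123", "app.example.com", "203.0.113.10")
```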

Cloud Bursting and Permanent Migration

Using the cloud as a disaster recovery site sets the foundation for the next step in a cloud strategy, which is utilizing the cloud as a primary site for specific applications. The bursting use case temporarily moves workloads to the cloud when demand for data center resources exceeds what is available. Cloud migration is the permanent move of an application to the cloud. Over time, especially with the permanent use case, the organization may want to modify the application to take advantage of scale-out compute.

Running in the cloud means the organization will need to either modify its application(s) to access cloud storage via native protocols or deploy a file system in the cloud that emulates what the application expects to see on-premises. The emulation choice allows the migration to the cloud to happen faster since no application changes are required.
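
The difference between the two approaches is easiest to see side by side. In this minimal sketch (bucket, path, and file names are hypothetical), the modified application writes through the cloud’s native object API, while the unmodified application keeps writing ordinary files to a path that an emulation layer maps to cloud storage behind the scenes.

```python
# Hypothetical sketch contrasting native object access with file system emulation.
import boto3

# Option 1: modify the application to speak the cloud's native object protocol.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-app-data",
              Key="orders/2024/order-1001.json",
              Body=b'{"order": 1001, "total": 49.95}')

# Option 2: leave the application unchanged; it keeps writing POSIX files,
# and a cloud file system layer (e.g., an NFS/SMB gateway) maps the path to object storage.
with open("/mnt/app-data/orders/2024/order-1001.json", "w") as f:
    f.write('{"order": 1001, "total": 49.95}')
```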

IT planners will need to consider what type of tool to use for bursting and migration efforts. In the bursting use case, applications that are candidates for temporary movement to the cloud will need their cloud copy continually updated with data from the primary data center, so that when peaks occur there is no cutover delay waiting for the cloud copy to update. Migration does not require constant updates, but it may require that the on-premises version remain online while data is copied to the cloud, facilitating a more seamless cutover.

In both cases, the organization will want to test the application in the cloud to make sure it works as expected under production loads. It will also want to ensure that it can quickly move the application out of the cloud and back on-premises if something goes wrong.

Additionally, in both cases, the organization must ensure the protection of cloud-based applications. IT planners should not assume that being in the cloud is protection enough. Cloud providers operate data centers, and those data centers are vulnerable to the same failures as any other data center. At a minimum, IT will want to make sure changes made to data by the cloud-based application are replicated to another region in that cloud, to another cloud altogether, or back to the on-premises copy of the application. In the future, a role for the legacy data center may be as the disaster recovery location for cloud-hosted applications.
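
For object data, one such safeguard is cross-region replication, sketched below with boto3 against Amazon S3 as an illustration; the bucket names and IAM role are placeholders, both buckets would need versioning enabled, and block- or VM-level data would need an equivalent replication mechanism.

```python
# Hypothetical sketch: replicate changes made in one cloud region to a bucket in another region.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-app-data-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [{
            "ID": "protect-cloud-app-data",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::example-app-data-eu-west-1"},
        }],
    },
)
```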

A final use case is temporarily leveraging cloud compute to process data outside of the organization’s standard applications. For example, most organizations capture video surveillance data. That data is typically stored for a short period and then deleted. But image recognition technology now makes it worthwhile to keep this data for longer and longer periods of time – for both security and marketing reasons, since organizations can now examine images to track customer trends, movement through a store, and satisfaction based on facial expressions.

Cloud providers offer these advanced data processing and analysis capabilities as services, and each provider seems to be developing its own unique strengths. As a result, the organization may want to replicate data to the cloud while also having the freedom to move data between clouds.
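
As one illustration of such a service, the sketch below (bucket and image names are hypothetical) runs a provider’s image-analysis API against surveillance frames that have already been replicated into that cloud.

```python
# Hypothetical sketch: apply a cloud image-analysis service to replicated surveillance frames.
import boto3

rekognition = boto3.client("rekognition")

resp = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-surveillance-archive",
                        "Name": "store-42/2024-01-15/frame-000123.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in resp["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```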

A Cloud Strategy for Traditional Data Centers

A traditional data center is one that is likely to be heavily virtualized, typically using VMware as its primary hypervisor. It may also have ventured out to other hypervisors, including Microsoft’s Hyper-V or one of the Linux hypervisors. It could also have some physical systems, probably running clustered applications like MS-SQL or Exchange. Most of these organizations have some cloud initiative. Typically, they are using the cloud for some backups or maybe archiving. They do not fully exploit the cloud and don’t really have a strategy or a set of solutions that will allow them to do so.

A traditional data center lives in the practical world and therefore needs a practical cloud strategy. Organizations need a “crawl, walk, run” approach to cloud adoption, as jumping in with both feet is not realistic.

A practical cloud strategy is built first by using the cloud as a backup and replication target, then advancing to using the cloud as a disaster recovery site and finally for cloud bursting or simple application migration. Perfecting these three pillars then sets the organization up for more advanced use cases, like an application built in the cloud that will permanently reside in the cloud. But, even the advanced use cases will need aspects of the practical cloud strategy, like data protection and cloud-to-cloud data movement. Ultimately, nothing is wasted by first developing the practical cloud strategy.

Part II – Developing a Data Movement Foundation

Developing a cloud strategy first requires an understanding of the various cloud use cases like Backup, Archive, Disaster Recovery, Cloud Migration, Cloud Bursting and Advanced Cloud Processing. Most organizations will eventually want to take advantage of all of these use cases, which makes the next step in developing a cloud strategy – selecting a data movement solution – very critical.

In some cases, the organization will end up with more than one data movement solution since some of the use cases are so different, such as archive vs. disaster recovery. However, the organization needs to make sure it is not deploying a different data movement solution for each use case, which would be difficult to manage and expensive to maintain.

Requirements for Cloud Data Movement

Moving data to the cloud requires efficient use of network bandwidth. It is necessary to copy data at a granular, sub-file level, to update it regularly, and to protect data in the cloud by using some form of journaled snapshots. It is also important to have the ability to move data out of the cloud and back to the on-premises data center when the disaster or peak load has passed.

Replication software is an ideal solution for the cloud data movement requirement. It can identify changed data at a sub-file level and move those segments to the cloud as needed. It also has the ability to take journaled snapshots of data in the cloud and manage the movement of data back on-premises.
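
Conceptually, sub-file replication tracks which blocks of a file have changed and ships only those blocks, while a journal records each update so the cloud copy can be rolled back to a point in time. The sketch below is a deliberately simplified, self-contained illustration of that idea, not any vendor’s actual implementation.

```python
# Simplified illustration of sub-file change detection with a journal (not a vendor implementation).
import hashlib, json, time

BLOCK_SIZE = 64 * 1024  # 64 KiB blocks

def block_hashes(path):
    """Hash each fixed-size block of a file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(path, previous_hashes):
    """Return (offset, data) for only the blocks that differ from the last sync."""
    changes = []
    current = block_hashes(path)
    with open(path, "rb") as f:
        for i, new_hash in enumerate(current):
            if i >= len(previous_hashes) or new_hash != previous_hashes[i]:
                f.seek(i * BLOCK_SIZE)
                changes.append((i * BLOCK_SIZE, f.read(BLOCK_SIZE)))
    return changes

def journal_entry(path, changes):
    """Record what was shipped so the replica can be rolled back to a point in time."""
    return json.dumps({"file": path, "timestamp": time.time(),
                       "offsets": [offset for offset, _ in changes]})
```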

Most traditional data centers use legacy shared storage arrays that support virtual and physical infrastructures. These systems offer some form of replication, but most vendors require that replication be targeted at another array from the same vendor. Essentially, both the source and the target need the same software. Most vendors have not created cloud versions of their storage software, so directly replicating to the cloud is not an option since each of the major cloud providers runs its own storage infrastructure. And because data centers typically have multiple storage systems from different vendors, managing a separate replication process for each of them to and from the cloud is untenable.

Instead, IT planners should look for software-based replication solutions that can move data from any source to any target. The any-to-any requirement also means “any cloud,” requiring that the solution have a cloud version of the software that can run not only in public cloud providers like Amazon AWS, Microsoft Azure, and Google Compute, but also in regional providers running VMware or Hyper-V.

Another requirement is that data movement be continuous instead of batch-based. By moving data to the cloud on a continuous basis, the organization is sure to keep the cloud copy up to date and ready for any disaster or bursting requirement. Continuous updates also mean that less data is transferred with each update, making these solutions better suited to bandwidth-constrained cloud connectivity.
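
Building on the block-level helpers in the previous sketch, the difference between continuous and batch movement can be shown in a few lines: instead of shipping a full copy on a nightly schedule, a continuous mover keeps polling for changes and sends only the blocks that differ since the last pass. The polling interval and the send_to_cloud function are placeholders.

```python
# Hypothetical sketch: continuously ship only changed blocks instead of periodic full copies.
import time

def continuous_replicate(path, send_to_cloud, interval_seconds=30):
    previous = block_hashes(path)            # helper from the earlier sketch
    while True:
        changes = changed_blocks(path, previous)
        if changes:
            send_to_cloud(path, changes)     # ship kilobytes of deltas, not the whole file
            previous = block_hashes(path)
        time.sleep(interval_seconds)
```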

The data movement solution also needs to be multi-directional. In the past, the multi-directional requirement existed so that cloud changes could update the primary data center after a disaster or cloud burst had passed. Now, multi-directional also means the ability to update other clouds with specific datasets. Multi-cloud is not just about providing an additional layer of protection through replication but also about leveraging advanced cloud services from whichever cloud vendor offers them.

Being able to instantiate an application in the cloud and to have its data accessible by advanced cloud services is made easier if that data is stored in a native format. Many solutions, especially backup solutions, store data in a proprietary format, and it is necessary to extract data from that format before the cloud can use it for disaster recovery or processing. Storing data in its native source format enables these processes to start much more quickly.

Finally, as the number of workloads leveraging the cloud grows and these use cases multiply, the organization will want to ensure the data movement solution provides automation and orchestration. The goal is to make recovering from a disaster, managing through a peak load, or sending data to another cloud for processing as much of a “push-button” experience as possible.
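
In practice, that usually means codifying the recovery or burst sequence as an ordered runbook so a single action runs every step and verifies each one. The generic sketch below is illustrative only; the step names and checks are hypothetical and reuse functions from the earlier sketches.

```python
# Hypothetical sketch: a push-button runbook that executes steps in order and verifies each one.
def run_runbook(steps):
    for name, action, check in steps:
        print(f"Running step: {name}")
        action()
        if not check():
            raise RuntimeError(f"Step failed verification: {name}")
    print("Runbook complete")

# Example disaster recovery runbook composed from the earlier sketches:
# run_runbook([
#     ("Boot database tier",    lambda: declare_disaster([("ami-0exampledb", "r5.xlarge")]), lambda: True),
#     ("Boot application tier", lambda: declare_disaster([("ami-0exampleapp", "m5.large")]), lambda: True),
#     ("Repoint DNS",           lambda: fail_over_dns("ZEXAMPLE123", "app.example.com", "203.0.113.10"), lambda: True),
# ])
```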

Zerto – The Multi-Cloud Data Movement Platform

Zerto’s IT Resilience Platform™ is a multi-cloud replication platform that enables fine-grained data movement between the traditional data center and the cloud, including across clouds, by using its universal CDP engine combined with orchestration and automation. Organizations can start by simply using the cloud or a service provider as a place to copy data, and then allow that use case to expand so that disaster recovery can be cloud-based. For organizations new to cloud-based disaster recovery, leveraging a regional cloud service provider will give them more of a “white glove” service, making sure that the more difficult configuration tasks, such as after-disaster networking, are worked out.

Zerto allows the organization to grow its cloud strategy beyond data protection and disaster recovery. Zerto’s offering can be used as a migration tool for applications that will be permanently cloud-based. Those applications can remain active on-premises until the cloud copy is created and brought up to date, and then the organization can cut over to the cloud instances seamlessly. The on-premises copy can continue to be updated as the cloud instance is used, allowing it to serve as a fail-back option.

Zerto also supports multiple cloud providers and can directly move data between them so the organization can develop cross-cloud redundancy and leverage the advanced cloud services unique to certain providers. In addition to Zerto certified cloud service providers, the solution supports Microsoft Azure, Amazon AWS, and the IBM Cloud.

Conclusion

A cloud strategy requires knowing the potential of the cloud and starting with the right foundation. Successful execution and long-term adoption of a cloud strategy requires a “crawl, walk, run” approach. The right foundation (data movement software) enables that approach. With it, the organization can start its cloud journey by making simple copies of data to the cloud and then gradually moving to the disaster recovery, cloud migration, cloud bursting and multi-cloud use cases.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
