A 3-Step Cloud Storage Strategy

The desired final step of a cloud strategy is to use the services available in the cloud to the fullest extent possible. The problem is that for most organizations the first step is leveraging the cloud for backup or archive, which typically means the data sent to the cloud is stored in a proprietary format or on a volume that cloud services can't access. When embarking on a cloud strategy, IT planners need to make sure the steps they take along the way do not make it difficult to achieve that final goal of leveraging cloud services.

Step 1 – Start with On-premises Services

Since most of an organization's data is local rather than in the cloud, the first step toward the cloud is to create a cloud storage platform on-premises: a system that both legacy applications and modern applications can use without refactoring. For most organizations, this means buying an object storage system that supports legacy protocol access, like NFS and SMB, as well as the S3 API.

It is important that organizations don't treat these protocol requirements as checkbox items. Object storage vendors provide very different levels of support for them. For example, legacy storage protocol support is often enabled by a third-party gateway that converts NFS/SMB requests to object calls. These gateways are not an ideal architecture: they can present scaling challenges and performance bottlenecks, and they, of course, add extra cost to the object storage solution. There is also the issue of support. If a problem arises between the two systems, finger pointing may ensue.

The same goes for AWS S3 API support. The organization wants to make sure that the system fully supports the S3 API and that the interface between the legacy and S3 protocols is seamless. Ideally, the system should support simultaneous access to the same data through any of the protocols.
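
As a rough illustration, the sketch below (Python with boto3) writes an object through the S3 API of an on-premises, S3-compatible object store and then reads the same data back through an NFS or SMB mount of the same bucket. The endpoint URL, bucket name, credentials, and mount path are all placeholders; exactly how a given vendor maps objects to files will vary.

    import boto3

    # Hypothetical endpoint and credentials for an on-premises, S3-compatible object store
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.internal:9000",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write a document through the S3 API
    s3.put_object(Bucket="shared-data", Key="reports/q1.txt", Body=b"quarterly report")

    # If the same bucket is also exported over NFS/SMB, the object should be
    # readable as a file at a vendor-defined path (placeholder path shown here)
    with open("/mnt/objectstore/shared-data/reports/q1.txt", "rb") as f:
        print(f.read())

If that round trip requires a separate gateway, extra copies, or a delay before the file appears, the "simultaneous access" box is not really checked.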

Step 2 – Move to the Cloud

The next step is to move to the cloud. Here, the object storage solution should provide the ability to synchronize data to the cloud based on policies. For example, some organizations might want to replicate all of their data to the cloud, while others might want to move only old, archived, or infrequently accessed data to the public cloud. The important capability is, again, simultaneous access to data, which means that a cloud service expecting data stored in an object store can access that data via the S3 protocol.
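
Policy engines differ from vendor to vendor, but many S3-compatible systems expose tiering rules through the standard S3 lifecycle API. The minimal sketch below assumes the on-premises store honors lifecycle configurations and maps a storage class name (a placeholder here) to a public cloud tier; real class names and rule options are vendor-defined.

    import boto3

    # Same hypothetical on-premises S3-compatible endpoint as before
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.internal:9000")

    # Move objects under "archive/" that are 90 days old to a cloud-backed tier.
    # "CLOUD_TIER" is a placeholder; actual storage class names are vendor-defined.
    s3.put_bucket_lifecycle_configuration(
        Bucket="shared-data",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-cold-data-to-public-cloud",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "archive/"},
                    "Transitions": [{"Days": 90, "StorageClass": "CLOUD_TIER"}],
                }
            ]
        },
    )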

If the object store software also manages a global namespace, it can treat both the on-premises private cloud and the public cloud storage as one entity. A global namespace means that an on-premises application can access data in either location seamlessly, without modification.
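
To make the idea concrete, the sketch below shows what location transparency looks like from the application's point of view: the code asks for an object by bucket and key, and the namespace layer decides whether that object is served locally or recalled from the cloud. The endpoint, bucket, and keys are all placeholders.

    import boto3

    # The application only ever talks to the global namespace endpoint (placeholder URL).
    # Whether an object physically lives on-premises or in a public cloud is hidden from it.
    ns = boto3.client("s3", endpoint_url="https://namespace.example.internal")

    def read_report(key: str) -> bytes:
        # Same call regardless of where the data currently resides
        response = ns.get_object(Bucket="shared-data", Key=key)
        return response["Body"].read()

    print(read_report("reports/q1.txt"))       # may be served from the local site
    print(read_report("archive/2015/q1.txt"))  # may be recalled from the cloud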

Step 3 – Go Multi-Cloud

The final step in the process is to go multi-cloud, essentially synchronizing data between clouds. Multi-cloud capabilities give the organization the opportunity to take advantage of services unique to certain providers or of better compute or storage pricing. Organizations already have multi-vendor strategies for infrastructure like servers, storage, and networking, so it is only natural that they would have multiple sources for cloud IaaS as well.

The key ingredient is how that multi-cloud strategy is implemented. IT planners should look for a product that provides a single namespace and uses policies to drive data placement, so that the location of data, whether on-premises or in any cloud provider, is transparent to applications and users. Ideally, the system would also restore old data that is accessed in the cloud back to the local data center, where it remains until an aging policy moves it back to the appropriate cloud.
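
There is no standard API for this kind of cross-cloud placement, so the sketch below is purely illustrative: a hypothetical policy, expressed as a Python data structure, of the sort a data management product might let administrators define. The rule names, tiers, and thresholds are all assumptions.

    # Hypothetical multi-cloud placement policy; field names and values are illustrative only.
    placement_policy = {
        "namespace": "shared-data",
        "rules": [
            {
                "name": "keep-active-data-local",
                "match": {"last_access_within_days": 30},
                "place_on": "on-premises",
            },
            {
                "name": "tier-cold-data-to-cheapest-cloud",
                "match": {"last_access_older_than_days": 180},
                "place_on": ["aws-s3-ia", "azure-cool", "gcp-nearline"],
                "select_by": "lowest-storage-cost",
            },
            {
                "name": "recall-on-access",
                "on_read": "restore-to-local",
                "re_tier_after_days": 30,
            },
        ],
    }

The point of the sketch is the shape of the decision, not the syntax: applications keep using the same namespace while the policy engine decides which provider holds each object and when accessed data is pulled back on-premises.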

StorageSwiss Take

Starting with private cloud storage on-premises is an ideal beginning to an organization's cloud journey, but that start has to be compatible with the long-term vision. IT planners need to look for object storage systems, as part of a data management solution, that can support legacy protocols while providing simultaneous access via modern protocols.

To learn more about selecting the right object storage system for your cloud journey, watch our ChalkTalk Video, “How To Develop a Cloud Storage Strategy That Works.”

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

