Is a Cloud Gateway Enough?

The ability for a storage system to store data in the cloud via an NFS or SMB connection – often referred to as a cloud gateway – is becoming table stakes. At this point in IT infrastructure development, we still need this translator to allow legacy applications to use object storage, whether local or in the cloud. The issue at hand is whether such a gateway can do more than simply translate between NFS/SMB and object storage.
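To make that translation concrete, here is a minimal sketch, in Python with boto3, of the kind of mapping a gateway performs: a file that arrives over NFS or SMB ends up as an object whose key mirrors its share path. The endpoint, bucket, and paths below are hypothetical placeholders, not any particular product's behavior.

```python
# Minimal sketch of the "translation" a cloud gateway performs: a file written
# over NFS/SMB is stored as an object PUT against an S3-compatible store.
# Endpoint, bucket, and paths are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")  # any S3-compatible endpoint

def write_file_as_object(local_path: str, bucket: str, share_relative_path: str) -> None:
    """Store a file that arrived over NFS/SMB as an object, keyed by its share path."""
    object_key = share_relative_path.lstrip("/")          # e.g. "projects/report.docx"
    with open(local_path, "rb") as f:
        s3.put_object(Bucket=bucket, Key=object_key, Body=f)

# write_file_as_object("/mnt/share/projects/report.docx", "gateway-bucket", "/projects/report.docx")
```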

One challenge with traditional cloud gateways is that they tend to treat “the cloud” as a single place. You use them as a gateway to send files and objects to and from Amazon Web Services (AWS), or to and from Azure. But the cloud isn’t a single place; it is composed of many types of storage and compute offerings, including private cloud resources. You may like AWS and Azure today, but what happens if tomorrow you decide that Google Cloud Storage is more appropriate for you? What if you decide that certain workloads are better served by your own private cloud? More important still, what if you decide that your data would best be served by a hybrid approach that combines multiple cloud vendors with a private cloud? Many cloud gateways cannot combine those resources into a single pool of storage or namespace. If you could seamlessly move data between your various cloud service providers, you could take full advantage of what the cloud offers; but if your cloud gateway can’t view all of those resources from one point, you can’t do that.
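To illustrate the idea of a single namespace spanning multiple clouds, here is a toy sketch, not any vendor's implementation: a routing table that maps path prefixes in one global namespace to buckets or containers on AWS, Google, Azure, or a private object store. The providers, prefixes, and container names are all hypothetical.

```python
# Toy illustration of a single global namespace laid over several clouds plus a
# private object store: the gateway routes each path prefix to a backend.
from typing import NamedTuple

class Backend(NamedTuple):
    provider: str   # "aws", "azure", "gcs", or "private"
    container: str  # bucket / container name on that provider

# Hypothetical placement policy: where each part of the namespace lives today.
ROUTES = {
    "/finance/": Backend("aws",     "corp-finance"),
    "/media/":   Backend("gcs",     "corp-media"),
    "/archive/": Backend("azure",   "corp-archive"),
    "/scratch/": Backend("private", "onprem-scratch"),
}

def resolve(global_path: str) -> tuple[Backend, str]:
    """Map one namespace path to the backend and object key that hold it."""
    for prefix, backend in ROUTES.items():
        if global_path.startswith(prefix):
            return backend, global_path[len(prefix):]
    raise KeyError(f"No backend mapped for {global_path}")

# resolve("/media/2023/launch.mp4") -> (Backend('gcs', 'corp-media'), '2023/launch.mp4')
```

The point of the sketch is that moving a dataset from one provider to another only changes the routing table; the paths applications see stay the same.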

Another question to ask is whether the cloud gateway takes advantage of the fact that cloud storage can be accessed from the cloud itself. For example, some workloads that need to access and process your data may require significant compute power that is not available in your data center, and they may need that power only a few weeks per year. You could buy that equipment for your own site, but it would sit unused the rest of the year. Such a workload is perfect for a public cloud computing service such as Amazon Elastic Compute Cloud (EC2). A cloud gateway could make it possible for applications running in EC2 to access data stored in the cloud, allowing you to use EC2 to run compute-intensive processes against your data.
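As a rough sketch of that pattern, the following Python (boto3) snippet shows the shape of a job that could run on an EC2 instance and stream data straight from an S3 bucket; the bucket and prefix are hypothetical, and the "processing" is a placeholder for whatever compute-heavy work the workload actually does.

```python
# Sketch of a compute-intensive job running on EC2 that reads its input
# directly from cloud object storage instead of pulling it back on premises.
# Bucket and prefix names are hypothetical.
import boto3

s3 = boto3.client("s3")

def process_dataset(bucket: str, prefix: str) -> int:
    """Stream every object under a prefix and do some (placeholder) processing."""
    total_bytes = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            total_bytes += len(body)   # replace with the real compute-heavy work
    return total_bytes

# Run from an EC2 instance spun up only for the few weeks the job is needed:
# print(process_dataset("gateway-bucket", "projects/"))
```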

Finally, what about leveraging your on-site infrastructure? Most cloud gateway systems cannot talk to on-site equipment such as a large filer or object storage system. Again, there may be workloads for which having a local copy of a particular set of data would be very valuable, and being unable to include your own hardware in the same global namespace limits the value of the gateway.
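The following toy sketch shows what a local copy inside the same global namespace might look like: a job asks for a file by its global path, and the data is staged onto an on-site filer if it is not already there. The mount point and the fetch_from_cloud helper are hypothetical, purely for illustration.

```python
# Toy sketch of the hybrid idea above: the working set for a job is staged onto
# an on-site filer, but the file is still addressed by its global namespace path.
# LOCAL_ROOT and fetch_from_cloud are hypothetical.
import os
import shutil

LOCAL_ROOT = "/mnt/onprem-filer"   # hypothetical mount of the on-site filer

def open_for_job(global_path: str, fetch_from_cloud):
    """Return a local file handle, staging the data on-site if it isn't there yet."""
    local_copy = os.path.join(LOCAL_ROOT, global_path.lstrip("/"))
    if not os.path.exists(local_copy):
        os.makedirs(os.path.dirname(local_copy), exist_ok=True)
        with open(local_copy, "wb") as dst:
            # fetch_from_cloud(path) is assumed to return a readable file-like object
            shutil.copyfileobj(fetch_from_cloud(global_path), dst)
    return open(local_copy, "rb")
```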

StorageSwiss Take

There is more to being a cloud gateway than simply translating NFS and SMB to the S3 API. Putting data into the cloud makes other things possible that otherwise cannot be done without significant cost and effort. Creating a single global namespace across multiple cloud resources can increase functionality and decrease cost. It is also important to be able to leverage the compute capabilities of the public cloud as well as those of on-site equipment; a cloud gateway could enable this by moving data to either location for short-term processing of compute-intensive workloads.

W. Curtis Preston (aka Mr. Backup) is an expert in backup and recovery systems, a space he has been working in since 1993. He has written three books on the subject: Backup & Recovery, Using SANs and NAS, and Unix Backup & Recovery. Mr. Preston is a writer and has spoken at hundreds of seminars and conferences around the world. Preston’s mission is to arm today’s IT managers with truly unbiased information about today’s storage industry and its products.

3 comments on “Is a Cloud Gateway Enough?”
  • jon reeves says:

    One of the issues with Cloud Gateways is that it’s often a “sledgehammer to crack a nut”. Implementing them often requires radical infrastructure changes and a wholesale move to the Cloud, and once those changes have been made, they’re not much more than a BC/DR engine. Once file data has gone into an object store it’s not able to be identified, and getting it out can be expensive. This may be why Cloud Gateway vendors add extra bells and whistles to enable users to keep and maintain active working sets whilst the rest is effectively archived off to the Cloud. As you point out, some users don’t want a wholesale move to the Cloud, or they have a lot of money invested in existing Storage infrastructure they still want to use, so something else is needed.

  • Terry says:

    This sounds like the Hitachi Content Platform.