Many organizations are adopting a “cloud-first” methodology: when looking for a home for a new application, the first candidate they consider is the cloud. If the cloud seems viable, IT will often start development work directly in the cloud, with the mindset that the application will stay there permanently. This cloud-first strategy presents three challenges that organizations must consider.
First, the cloud-first strategy essentially ignores the cloud’s potential for existing applications. Second, it requires developers to design new applications solely with cloud constructs in mind, ignoring legacy access models. Third, it makes it difficult to “de-cloud” these applications if the organization later decides it wants to host them itself.
Legacy Apps Need Legacy Protocols
Legacy applications can certainly reap the same benefits from the cloud that new, modern applications enjoy. But moving a legacy application to the cloud may require a major rewrite to take full advantage of cloud constructs. One of those constructs is, of course, the storage architecture.
Leveraging Legacy Protocols for New Age Apps
Even if a developer is writing a new application from scratch with the cloud in mind, the developer may still want to use legacy protocols. The developer may be more comfortable with those protocols, or there may be external applications that need to access the data the new application creates, and those external applications may not have native cloud storage support.
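To make the contrast concrete, here is a minimal sketch of the two access models. The mount point and file names are hypothetical; the point is that a legacy application uses ordinary POSIX file calls, which work unchanged against any NFS or SMB mount, while the cloud-native path requires an object-storage API.

```python
import os
import tempfile

# A legacy application reads and writes through standard POSIX file
# calls. The directory below is a stand-in for a hypothetical NFS or
# SMB mount point such as /mnt/cloud-nas; the code would be identical
# whether the mount is backed by a local array or a Cloud NAS.
mount_point = tempfile.mkdtemp()
report_path = os.path.join(mount_point, "report.csv")

with open(report_path, "w") as f:
    f.write("region,revenue\nus-east,1200\n")

with open(report_path) as f:
    print(f.read().splitlines()[1])  # prints "us-east,1200"

# A cloud-native application would instead call an object API, e.g.
# with the boto3 SDK (assumed, not runnable without credentials):
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket="my-bucket", Key="report.csv", Body=data)
# An external application without S3 support cannot read that object
# directly, but it can read the file above.
```

The design choice here is what the section describes: if collaborating applications only speak file protocols, writing the new application against a file path keeps them interoperable.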
De-clouding an Application
Finally, the organization may decide that an application should move back on premises for security or cost reasons. An increasing number of organizations find that the upfront cost savings of the cloud erode over time because of long-term recurring fees and charges for recalling data. Whatever the reason, if the organization wants to bring a cloud-designed application back in house, the on-prem data center needs to provide cloud storage protocols, typically S3 or native object storage. Alternatively, the organization, especially if it thinks the chances of bringing the application back in-house are high, may design it with standard NFS or SMB protocols from the start.
Cloud Storage Protocol Flexibility
There are solutions available that provide legacy storage protocol access so that these applications can be migrated to the cloud more easily, and so that their return to the data center is seamless. With these solutions the application sees an NFS or SMB share just like it did when it was on premises. Providing legacy protocol access in the cloud requires something different from the on-premises solutions we discussed in our last blog. Now the cloud gateway or Cloud NAS also has to run in the cloud, sitting in front of cloud storage to present an NFS or SMB share that it then translates to S3 or native object storage.
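From the application server’s point of view, the Cloud NAS export is just another NFS mount. As an illustrative sketch (the host name, export path, and mount options below are hypothetical, not taken from any particular product), the cloud-hosted application server might carry an fstab entry like this:

```
# /etc/fstab entry on the cloud-hosted application server.
# "cloudnas.internal" is a hypothetical Cloud NAS instance that
# translates these NFS operations into S3/object calls behind the scenes.
cloudnas.internal:/export/appdata  /mnt/appdata  nfs  rw,hard,vers=4.1  0  0
```

Because the application only ever sees /mnt/appdata, the same entry, pointed at an on-premises NAS instead, is all that changes when the application moves back into the data center.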
Intelligent Use of Cloud Storage Tiers
Another key consideration, and what separates a Cloud NAS from a Cloud Gateway, is the ability to leverage the cloud’s storage tiers. Most cloud providers today offer at least two storage performance tiers, and most offer three. The offerings can range from RAM to flash, to performance hard disk, to capacity hard disk, and in some cases even to tape. A Cloud NAS can traverse these classes of storage and properly balance high performance with cost effectiveness.
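In AWS terms, for example, this kind of tier traversal resembles an S3 lifecycle policy. The sketch below (bucket-wide rule, illustrative day thresholds) moves aging objects from the standard tier to infrequent-access storage and then to archive; a Cloud NAS performs the equivalent movement transparently at the file level, without the application changing how it reads its data.

```json
{
  "Rules": [
    {
      "ID": "tier-cold-data",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30,  "StorageClass": "STANDARD_IA" },
        { "Days": 180, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```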
Most organizations assume cloud-hosted applications have to use native cloud storage (S3 or object) instead of more traditional file storage protocols like NFS or SMB. This assumption limits the applications that can move to the cloud and makes exiting the cloud more difficult. Instead, organizations should look for Cloud Gateway or Cloud NAS solutions, like the one we discuss in our prior entry, that can be instantiated in the cloud and provide NFS/SMB access to cloud-hosted applications.