Many Cloud NAS solutions leverage a local cache to get around the latency problem when retrieving data from the cloud. If the requested file is in the local cache, the user notices no performance difference from before they had a Cloud NAS. But what if that file isn’t in the local cache? If the file is small and access to the cloud is fast enough, then in most cases the Cloud NAS solution will still suffice; users may notice a slowdown, but not much of one. But what if that file is large, several gigabytes for example? Then the cloud access goes from a minor annoyance to a major problem.
Transferring large files is not only about bandwidth; it is also about latency. Even in large file transfers, latency matters because of the many round trips, including acknowledgments and CRC integrity checks, that occur while a large file moves across the network. The other type of transfer that causes concern is moving a very large number of very small files, where latency is even more critical.
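The interplay of bandwidth and latency can be sketched with a deliberately simplified model that charges one round trip per acknowledged chunk. The function name, chunk size, and numbers below are illustrative assumptions, not figures from any particular product, and real protocols pipeline requests to hide some of this cost:

```python
def transfer_time_s(total_bytes, bandwidth_bps, rtt_s, chunk_bytes=1 << 20):
    """Rough transfer-time estimate for a protocol that waits one
    round trip per 1 MB chunk. Illustrative model only."""
    round_trips = -(-total_bytes // chunk_bytes)  # ceiling division
    return total_bytes * 8 / bandwidth_bps + round_trips * rtt_s

# A 5 GB file over a 1 Gb/s link: the bandwidth term alone is 40 s.
size = 5 * 10**9
near = transfer_time_s(size, 10**9, 0.002)  # ~2 ms RTT (nearby cloud)
far = transfer_time_s(size, 10**9, 0.080)   # ~80 ms RTT (distant cloud)
print(f"near: {near:.0f} s, far: {far:.0f} s")

# 10,000 small 64 KB files, one round trip each: latency dominates.
small = 10_000 * (64 * 1024 * 8 / 10**9 + 0.080)
print(f"10k small files at 80 ms RTT: {small:.0f} s")
```

With the same bandwidth, the distant cloud takes several times longer for the single large file, and for the many-small-files case almost all of the elapsed time is round-trip latency rather than data movement.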
Most Cloud NAS solutions pair an all-flash caching appliance on-premises with a hard disk drive-based cloud target. The problem is not necessarily the type of storage chosen for on-premises and in the cloud. More than likely it is the distance between the cloud provider and the primary data center where the caching appliance is located. Cloud NAS solutions, just like everything else, must obey the speed of light.
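A back-of-the-envelope calculation makes the distance penalty concrete. Light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, which puts a physics-imposed floor under round-trip time no matter how much bandwidth is purchased. The distances below are illustrative:

```python
C_FIBER_KM_S = 200_000  # approx. speed of light in fiber, km/s (~2/3 of c)

def min_rtt_ms(distance_km):
    """Physics-imposed floor on round-trip time; real RTTs are higher
    because of routing, queuing, and protocol overhead."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

print(min_rtt_ms(100))   # regional point of presence ~100 km away: 1 ms
print(min_rtt_ms(4000))  # cross-country cloud region: 40 ms
```

No caching algorithm can lower those floors; only moving the data closer can.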
How Cloud NAS Can Win
Cloud NAS can work around the speed of light problem, and service non-cached large file accesses, by establishing a cloud point of presence closer to the primary data center. The Cloud NAS solution needs to create a three-tiered architecture: the on-premises appliance, the regional point of presence, and the primary public cloud provider. Data can then move between these three tiers as it makes sense.
In most cases, the organization will likely decide to keep 100% of its data in the regional cloud provider and 100% of its data in a primary cloud provider like Amazon or Google. That way the solution can service any on-premises cache miss within milliseconds from the regional cloud. This three-tier architecture also allows the organization to implement a smaller (in terms of capacity) on-premises appliance, since the penalty for a cache miss is not as severe.
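The read path of such a three-tier design can be sketched as a simple waterfall: try the on-premises cache first, then the regional point of presence, then the primary cloud. The class and tier names below are hypothetical, purely to illustrate the architecture, not any vendor's actual API:

```python
class TieredCloudNAS:
    """Illustrative three-tier read path: on-prem flash cache,
    regional point of presence, primary public cloud."""

    def __init__(self):
        self.on_prem = {}    # small flash cache, hot working set only
        self.regional = {}   # 100% of data, milliseconds away
        self.primary = {}    # 100% of data, authoritative, farthest

    def write(self, path, data):
        # Writes land on the appliance and replicate to both clouds,
        # so each cloud tier holds a complete copy.
        self.on_prem[path] = data
        self.regional[path] = data
        self.primary[path] = data

    def read(self, path):
        for name, tier in (("on-prem", self.on_prem),
                           ("regional", self.regional),
                           ("primary", self.primary)):
            if path in tier:
                if tier is not self.on_prem:
                    self.on_prem[path] = tier[path]  # promote on a miss
                return name, tier[path]
        raise FileNotFoundError(path)

nas = TieredCloudNAS()
nas.write("/projects/render.mov", b"...")
del nas.on_prem["/projects/render.mov"]   # simulate cache eviction
tier, _ = nas.read("/projects/render.mov")
print(tier)  # → regional: the miss is served nearby, not from afar
```

Because the regional tier holds a full copy, a cache miss is absorbed at regional latency, which is what lets the on-premises appliance shrink without a severe miss penalty.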
The Data Protection Bonus
Another challenge facing large file workloads is data protection. The size of these files puts a strain on backup products, making even incremental backup jobs large and time-consuming. The organization is forced to use image-based backup techniques, which means a loss of file-level granularity. A three-tiered Cloud NAS architecture essentially includes backup: Cloud NAS snapshots and cloud replication effectively protect data from harm without impacting production applications.
Large file accesses, or accesses to very large numbers of small files, are particularly problematic for Cloud NAS solutions. While they can force the caching of the data locally on the appliance, doing so raises the capacity and the cost of the on-premises appliance. A three-tier model allows the organization to purchase a smaller appliance and then leverage a regional cloud provider when there is a miss. This architecture opens the way for Cloud NAS to drive down storage costs to the point that it is truly less expensive than on-premises alternatives.
To learn more about replacing on-premises NAS with Cloud NAS, check out our on-demand webinar “NAS Refresh? – 5 Reasons to Consider the Cloud”. Also included in the registration is access to our white paper, “Not All Cloud NAS Are the Same – Understanding the Difference Between Cloud NAS Solutions”, available exclusively in the attachments section of the webinar.