The cloud, both compute and storage, is appealing to IT administrators because they buy the cloud “as a service.” That means an organization can gain access to compute and storage resources as they need them and only when they need them. But the cloud isn’t the only storage platform that works in the “as a service” model.
Flash storage, for example, is an ideal candidate for the “as a service” model. Most applications only need the high performance capabilities of flash for moments in time, not forever. The ability to apply flash when you need it and stop paying for it when you are done is an ideal use case. Why, then, haven’t we seen flash as a service (FaaS) appear everywhere?
Latency Strikes Again
The issue is latency, and it continues to be the cloud’s biggest nemesis. Latency is the time it takes a packet of data to travel from the data center to the cloud provider, and all the bandwidth in the world won’t make that part of the data transfer go any faster. It is a speed-of-light issue. The induced latency makes flash in the cloud less practical, but there are ways to leverage both the flash and the cloud.
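The speed-of-light floor is easy to estimate. A minimal sketch, using an assumed 1,500 km distance to a cloud region and the rule of thumb that light in fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum):

```python
# Back-of-the-envelope propagation latency to a cloud region.
# Both the distance and the fiber speed are illustrative assumptions.
FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum
distance_km = 1_500             # assumed distance to the cloud provider

one_way_ms = distance_km / FIBER_SPEED_KM_PER_S * 1_000
round_trip_ms = 2 * one_way_ms  # a storage request needs a round trip

print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
```

That works out to about 15 ms per round trip before any switching, queuing, or protocol overhead, while an on-premises flash array typically responds in well under a millisecond. No amount of bandwidth closes that gap.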
Flash vs. Cloud – Can’t we all get along?
Obviously job number one is to get rid of the latency distance causes. The most obvious solution is to move the application itself to the cloud so it has local access to cloud-based flash storage. The problem is that converting an application to be “cloud ready” may be a huge undertaking for which the organization lacks the time or skill set.
Another option is to use an on-premises flash appliance that provides access to the most active data without incurring cloud latency. As data ages it is stored exclusively in the cloud, freeing up on-premises capacity.
There are two problems with this approach. The first is that the flash part of the investment is not really “as a service”; it is bought up front, so the organization essentially owns it. The second problem, and potentially the more concerning one, is what happens on a cache miss: the latency of the cloud is re-introduced. Of course, the organization can reduce cache misses by purchasing extra on-premises flash, but that merely exacerbates the first problem.
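The cache-miss penalty is worth quantifying, because even a small miss rate dominates average latency. A minimal sketch of the weighted-average math, using illustrative (not measured) figures of 0.2 ms for local flash and a 40 ms cloud round trip:

```python
# Effective read latency of an on-premises flash cache backed by cloud storage.
# The 0.2 ms flash and 40 ms cloud figures are illustrative assumptions.
def effective_latency_ms(hit_rate, flash_ms=0.2, cloud_ms=40.0):
    """Weighted average: hits are served from flash, misses pay the cloud round trip."""
    return hit_rate * flash_ms + (1 - hit_rate) * cloud_ms

for hit_rate in (0.99, 0.95, 0.90):
    print(f"{hit_rate:.0%} hit rate -> {effective_latency_ms(hit_rate):.2f} ms average")
```

With these assumptions, dropping from a 99% to a 90% hit rate inflates average latency roughly sevenfold, which is why unpredictable access patterns push organizations toward buying ever more flash up front.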
A third option is one that we discussed in our recent webinar, “5 Reasons Primary Cloud Storage is Broken and How to Fix Them”, now available on-demand. It still uses an on-premises flash appliance but leverages a more local point of presence to store less active data. The result is that latency drops to a few milliseconds, while the cloud stores dormant data for long-term retention.
A final option is one that we discussed in another webinar, “Flash-as-a-Service – Achieve Flash Performance with Cloud Economics”, also available on-demand. It focuses more on the business model, providing a way for organizations to purchase flash storage only as, and while, they need it while keeping that storage on-premises. All access is local.
Which option makes the most sense for your organization? As is always the case, it depends. With FaaS it depends mostly on how predictable your data accesses are and how sensitive you are to the latency induced by a cache miss. Another factor is how badly the organization wants to minimize on-premises storage. We encourage you to listen to each of these webinars to see which makes the most sense for you and your organization’s goals.