Fixing the Hybrid Cloud

Briefing Note: Velostrata

Many vendors and analysts have suggested that the hybrid cloud model is the best way for traditional data centers to adopt and leverage the cloud. But IT planners at those data centers remain unconvinced, and for good reason. They are still concerned about the risk of putting their data and applications in the cloud. They also now realize that while compute can be bought on-demand, storage is a constant, recurring expense. Finally, the sheer amount of time it takes to move even relatively small amounts of data makes a hybrid cloud strategy untenable for many organizations.

The Cloud is for Temporary Things

The fundamental problem is that the cloud is best at transitory activities, like compute that can be allocated and released almost on the fly, and worst at permanent ones, like storing the data those applications need. That transitory strength makes the cloud ideal for temporarily bursting workloads when on-site compute can't handle the load. It is not necessarily good for the long-term storage and archiving of data, especially if the data set is larger than a couple dozen terabytes.

The Hybrid Solution

The hybrid cloud is being looked at the wrong way. Most cloud gateways and appliances cache the most active data on-site in the organization's data center and store the older, less active data in the cloud. These solutions use the cloud primarily for what it is worst at: long-term storage of historical data. They don't use it for what it is best at: transitory compute, which typically needs access to only the most recent copy of the data.

The solution is to reverse the model: cache the most active data in the cloud, so only a small amount of data is stored there. This way, applications running in the cloud, or moved there to handle peak demand, have ready access to the data they need. The data center, in essence, becomes the place that handles the long-term retention of data.
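The reversed model can be sketched as a simple LRU working set: only the most recently accessed blocks are staged in the cloud tier, while the on-premises store keeps the full, authoritative copy. This is a conceptual illustration only, not Velostrata's actual implementation; the class and names below are hypothetical.

```python
from collections import OrderedDict

class ReversedHybridCache:
    """Conceptual sketch: the cloud tier holds only the hot working set,
    while the on-prem store remains the authoritative, full copy."""

    def __init__(self, cloud_capacity_blocks):
        self.on_prem = {}                # full data set (long-term retention)
        self.cloud = OrderedDict()       # small LRU working set in the cloud
        self.capacity = cloud_capacity_blocks

    def write(self, block_id, data):
        # All writes land on-premises; the data center keeps control of the data.
        self.on_prem[block_id] = data

    def read(self, block_id):
        if block_id in self.cloud:
            # Cloud-side hit: the active data is already pre-positioned.
            self.cloud.move_to_end(block_id)
            return self.cloud[block_id]
        # Miss: fetch from on-prem and stage into the cloud working set.
        data = self.on_prem[block_id]
        self.cloud[block_id] = data
        if len(self.cloud) > self.capacity:
            # Evict the coldest block; the on-prem copy is untouched.
            self.cloud.popitem(last=False)
        return data
```

The key property is that cloud storage never grows beyond the working set, no matter how large the on-premises data set becomes.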

Introducing Velostrata

Reversing the status quo is what the Velostrata solution provides. It decouples compute from storage, moving compute into the cloud as needed while keeping storage in the data center. This decoupling allows active data sets to be pre-positioned in the cloud, or copied there quickly, without performance degradation. As a result, data centers can move applications to the cloud and back again in minutes, versus the weeks or months alternative approaches require. The use cases for this type of solution are numerous:

  • Cloud Bursting – Leverage cloud compute when peak loads occur; the switch to the cloud is sped up because Velostrata pre-positions the active data there.
  • Disaster Recovery – There is no need to pay for and manage compute at a secondary disaster recovery site; compute can be spun up in minutes if a DR event occurs.
  • Cloud-Hosted Applications – The cloud can host the entire application with minimal cloud storage requirements. Cloud-hosted applications can scale up and down as needed, while the data center stores and maintains control of the actual data.

Another key advantage of this type of solution is that it virtually eliminates cloud vendor lock-in. While many vendors claim you are not "locked in," the reality is that the physical size of the storage footprint makes it very difficult to move data from one cloud to another. Because Velostrata moves only the most active data set to the cloud (a fraction of the total), boots an OS over the network in minutes, and does all the image adaptation on the fly, movement between cloud platforms is very fast.

Most importantly, all of this workload movement is done within VMware's vCenter console. No changes to the applications, images, or storage are required; a few mouse clicks can move a workload from on-premises to the cloud. Integration into a well-known console is a critical advantage, since with it every VMware-savvy administrator can create a hybrid cloud deployment with little additional training.

StorageSwiss Take

The current hybrid cloud model views cloud storage as a long-term repository for backup and archive data. That view is acceptable for businesses with less than 50TB of data, but it fundamentally breaks down as data volumes surpass 50TB. The recurring bill for storing data at those capacities adds up, and data centers can often provide the storage in-house at a lower cost. Velostrata is one of only a few vendors that share our concern, and it has arguably gone the furthest in solving the problem. It not only automatically controls what data is stored on-premises and in the cloud, it also makes the movement of workloads fast and seamless through its tight vCenter integration.
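The recurring-bill arithmetic behind that break-even point is easy to sketch. The per-GB rate below is an assumed, illustrative figure, not a quoted price from any provider; real pricing varies by provider, storage tier, and egress charges.

```python
# Assumed, illustrative cloud storage rate in $/GB-month -- NOT a quoted
# vendor price; actual rates vary by provider, tier, and region.
CLOUD_RATE_PER_GB_MONTH = 0.02

def monthly_cloud_cost(capacity_tb, rate=CLOUD_RATE_PER_GB_MONTH):
    """Recurring monthly bill for keeping capacity_tb of data in cloud storage.

    Uses decimal units (1 TB = 1000 GB) for simplicity.
    """
    return capacity_tb * 1000 * rate
```

At the assumed rate, 50TB runs about $1,000 per month, or $12,000 per year, every year, which is the kind of recurring cost that in-house storage can undercut at scale.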

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
