Cloud computing and cloud storage certainly get a lot of headlines, but how many IT departments are actually using the cloud in any serious, strategic manner? While most surveys report high cloud adoption per organization, these surveys are often misleading. The survey question is generally phrased along the lines of “are you using the cloud for anything?”, not “are you using the cloud for everything?”, “how much do you leverage the cloud in day-to-day operations?” or “are you using the cloud for anything important?” If the latter questions were asked, we think the results would reveal significantly less usage than these surveys report.
Opposite the cloud fanboys are many pundits who are simply “cloud haters,” quick to point out that anything associated with the cloud is bad and risky. They are partly right: the cloud is not some magical place where unicorns live and systems never go down. A cloud provider runs a data center that other businesses can rent. These providers do, in theory, have the advantage of being solely focused on IT, and one would hope that they are good at it. All we can offer is the sage advice to “trust but verify.”
Traditional IT needs a strategy to use the cloud in a sane, rational way: one that leverages what the cloud is good at and avoids what the cloud is bad at. Logically, this strategy should also target what the internal data center is not good at. There’s no sense using the cloud if the project can be done better by internal IT. In other words, don’t use the cloud just for the sake of using the cloud.
Cloud-Based Application Lifecycles – The Logical Way to Use the Cloud
One of the big challenges IT has traditionally faced is responding to a new application’s demands on the environment. A new application often requires servers, storage, and networking. Typically, the application’s requirements are set up front and hardware is purchased to match the projected needs of that application once it reaches production.
There are two basic problems with this approach. First, any application is going to take months, if not years, to be ready for production. This means all the compute, storage, and networking specified up front goes essentially unused during the development cycle. Then, even as the application moves into production, it rarely sees full utilization on day one; again, it may take months for the application to come anywhere close to full utilization. It is important to point out that over the months or years it takes for the application to fully utilize the infrastructure, the same hardware will have become far less expensive, so the organization pays a premium for capacity it cannot yet use.
These development and early production stages are ideal use cases for the cloud. Compute, storage, and networking can be rented short term, as needed, in the cloud. This means no capital outlay for the organization and only a limited operational outlay.
Once the development work is done and production starts to ramp up, hardware can be purchased and the application can be moved to the on-premises data center. This gives IT maximum control and should be less expensive than the cloud, since the equipment can be bought with a better understanding of what production workloads will actually be.
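The break-even point between renting cloud capacity and buying hardware can be estimated with simple arithmetic. The sketch below finds the month in which cumulative cloud spend overtakes an upfront purchase; all of the dollar figures are illustrative assumptions, not vendor quotes.

```python
# Hypothetical cost comparison: monthly cloud rental vs. an upfront
# hardware purchase. All figures are illustrative assumptions.

def crossover_month(cloud_monthly_cost, hardware_capex, hardware_monthly_opex):
    """Return the first month in which cumulative cloud spend exceeds
    cumulative on-premises spend (capex plus monthly opex)."""
    month = 0
    cloud_total = 0.0
    onprem_total = float(hardware_capex)
    while cloud_total <= onprem_total:
        month += 1
        cloud_total += cloud_monthly_cost
        onprem_total += hardware_monthly_opex
    return month

# Example: $4,000/month in the cloud vs. $60,000 of hardware
# plus $1,000/month to power and maintain it.
print(crossover_month(4000, 60000, 1000))  # → 21
```

In this example the cloud is cheaper for roughly the first 20 months, which is why renting through development and early production, then buying once the workload is understood, can make financial sense.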
Eventually, most applications hit a legacy phase. In this phase, the data within the application is still important but the application is not used heavily on a day-to-day basis. At this point, the organization may choose to shift the application back to the cloud, where it can run when it is needed, but only when it is needed. As in the development phase, the cloud model of paying for CPU cycles as you use them is ideal here. In essence, the application could sit dormant for months at practically no cost to the organization.
Cloud Bursting and Cloud DR
The other advantage of cloud-based application lifecycles is that they lay the foundation for two other practical cloud use cases: cloud bursting and cloud DR. If the application for some reason becomes overwhelmed with use, it can be temporarily moved back into the cloud until the peak load has passed. The organization can then decide whether to acquire more equipment internally or simply continue to use the cloud when peak demand arises. Cloud bursting could save the data center a significant amount of CapEx, since it allows IT planners to buy equipment for normal operating conditions rather than peak conditions.
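The "buy for normal, burst for peak" idea can be made concrete with a small sketch: given an hourly demand profile, size on-premises capacity for typical load and send only the overflow to the cloud. The demand numbers and the capacity figure below are illustrative assumptions.

```python
# Hypothetical burst-sizing sketch: on-premises equipment handles demand
# up to its capacity; anything above that bursts to the cloud.
# The demand profile and capacity value are illustrative assumptions.

def split_demand(hourly_demand, onprem_capacity):
    """Split each hour's demand into on-premises and cloud-burst portions."""
    onprem = [min(d, onprem_capacity) for d in hourly_demand]
    burst = [max(d - onprem_capacity, 0) for d in hourly_demand]
    return onprem, burst

demand = [40, 55, 60, 120, 200, 65, 50]   # requests/hour, peak at hour 5
onprem, burst = split_demand(demand, 70)  # buy for typical load, not peak
print(sum(burst))  # → 180 (overflow served from the cloud)
```

Here, buying for the 200-request peak would leave most capacity idle most of the time; buying for 70 and bursting the two peak hours to the cloud is the CapEx saving the article describes.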
Cloud DR could save the cost of building a secondary data center with idle equipment always at the ready. If a disaster occurs, the application can simply be shifted to the cloud and run there. This does require some advance planning to make sure a recent copy of the application and its data is always in the cloud, on standby in the event of an unplanned outage.
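The "recent copy" requirement is usually expressed as a recovery point objective (RPO): the standby copy in the cloud must never be older than the business can tolerate losing. A minimal sketch of that check, with hypothetical timestamps and a hypothetical four-hour RPO:

```python
# Hypothetical DR standby-copy check: is the most recent replica pushed
# to the cloud fresh enough to meet the recovery point objective (RPO)?
# The timestamps and the RPO value are illustrative assumptions.

from datetime import datetime, timedelta

def replica_meets_rpo(last_replica_time, now, rpo):
    """True if the cloud standby copy is no older than the RPO allows."""
    return now - last_replica_time <= rpo

now = datetime(2014, 6, 1, 12, 0)
last_sync = datetime(2014, 6, 1, 8, 30)   # last copy pushed to the cloud
print(replica_meets_rpo(last_sync, now, timedelta(hours=4)))  # → True
```

In practice a monitoring job would run a check like this continuously and alert when replication falls behind, so the "will it work when we need it to" question is answered before the disaster rather than during it.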
In both cases, the big question – “will it work when we need it to?” – is answered with a yes, because the application was developed and initially run in the cloud. While there are some scaling issues to confirm, the basic question of whether it will work has been answered. The movement of the application has also been tested, since it was migrated on-premises for production. Again, while there may be ways to improve movement back and forth, the basic question of migration has been answered.
The Tools Required for Cloud-Based Application Lifecycles
Of course, the movement of this data in and out of the cloud is going to require special skills and tools. There are a few vendors that Storage Switzerland feels are out in front of this concept and well positioned to deliver cloud-based application lifecycles. In our next few entries, we will detail a few of these so you can decide which one is best for you.