A key component of most organizations’ cloud strategies is migrating applications to a cloud provider to reduce dependence on on-premises infrastructure. The organization may want to host the application entirely in the cloud, use the cloud as a failover point for the application, or use the cloud to bring additional compute resources to the application when demand exceeds data center capacity, a situation known as cloud bursting.
The Cloud Migration Problem
While many organizations will see some level of success using the cloud for data protection, most organizations fail completely at the application migration part of their cloud strategy. Those that do make it through application migration find it a much more time-consuming process than originally planned.
The primary difficulty is how to lift and shift applications that are already running in the data center. Most businesses see a cloud-native rewrite of their existing data center applications as their only option. Rewriting an application is time-consuming and very expensive, and it means the application must be completely requalified for stability and functionality.
Another challenge with rewriting applications is that it makes creating a hybrid strategy more difficult. A hybrid approach is necessary to use the cloud for disaster recovery and for cloud bursting, but with a rewrite strategy the organization must maintain both a legacy on-premises version of each application and a cloud-native version. The organization could instead create a cloud-like infrastructure in its own data center, but that is even more costly, requiring investment in compute, cloud management software, and cloud-like storage.
Migration is Just the Beginning
Migrating applications to the cloud is just the beginning of an organization’s challenges. Next, the organization needs to manage how its applications consume cloud capacity and make sure cloud storage consistently delivers the right level of performance. Instead of actively managing data, most organizations put all of their application data on the highest-performing (and most expensive) tier of cloud storage available. The reality, just as in the data center, is that this strategy is costly and inefficient compared to moving aging data to appropriate, less expensive cloud storage tiers as access to that data subsides.
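As a rough illustration of the gap, consider a back-of-the-envelope comparison. The per-GB prices and the 20% active-data figure below are assumptions for the sake of the sketch, not quotes from any provider:

```python
# A minimal cost sketch with hypothetical per-GB monthly prices; real
# cloud pricing varies by provider, region, and access charges.
PREMIUM_PER_GB = 0.125   # assumed high-performance tier ($/GB/month)
CAPACITY_PER_GB = 0.01   # assumed high-capacity tier ($/GB/month)

total_gb = 100_000       # 100 TB of application data
active_fraction = 0.20   # assume only 20% of the data is actively accessed

all_premium = total_gb * PREMIUM_PER_GB
tiered = (total_gb * active_fraction * PREMIUM_PER_GB
          + total_gb * (1 - active_fraction) * CAPACITY_PER_GB)

print(f"All data on premium tier: ${all_premium:,.0f}/month")
print(f"Tiered by activity:       ${tiered:,.0f}/month")
```

Under these assumptions, tiering cuts the monthly bill to roughly a quarter of the all-premium figure, which is why data placement matters as much in the cloud as it does on-premises.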
Four Simple Steps to Lift and Shift with a Cloud Data Fabric
In the same way that IT can use the Cloud Data Fabric (CDF) to ease and enhance data protection with the cloud, it can also use the CDF to improve an organization’s success rate and flexibility when migrating applications to the cloud. The CDF creates a POSIX-compliant file system in the cloud that provides both file and block storage access via standard protocols such as NFS, SMB/CIFS, and iSCSI. With the CDF as a foundation, IT can move on to the next step: migrating the applications, unchanged, to the cloud.
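Because the CDF presents ordinary POSIX semantics, an application that already reads and writes files needs no code changes; only the mount point differs. A minimal sketch, assuming a hypothetical CDF mount at /mnt/cdf (no specific product API is implied):

```python
from pathlib import Path

# The application keeps using standard POSIX file I/O; the only change is
# that the path now resolves to a CDF-backed mount (e.g., NFS or SMB).
DATA_DIR = Path("/mnt/cdf/orders")   # hypothetical CDF mount point

def record_order(order_id: str, payload: str) -> None:
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    (DATA_DIR / f"{order_id}.json").write_text(payload)

def read_order(order_id: str) -> str:
    return (DATA_DIR / f"{order_id}.json").read_text()
```

The same code runs on-premises and in the cloud, which is the essence of lift and shift.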
In the migration step, whether rewriting the applications or leveraging the CDF, IT has to deal with the reality that bandwidth is not limitless. Unlike the rewrite method, which provides no assistance, the right CDF solution can help with intelligent high-speed bulk transfers. Since a CDF is in place both on-premises and in the cloud, it owns the transfer method: it can optimize transfers over the WAN and move data much faster than standard TCP-based transfers.
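The transfer protocol itself is vendor-specific, but the general technique of splitting large files into chunks and moving them over many concurrent streams to keep a high-latency WAN link full can be sketched as follows. The send_chunk function is a placeholder for whatever wire protocol a given CDF actually uses:

```python
import concurrent.futures
import os

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB chunks
MAX_STREAMS = 8                # concurrent streams to fill the WAN pipe

def send_chunk(path: str, offset: int, length: int) -> int:
    """Placeholder for the wire protocol; reads and 'sends' one chunk."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    # ... transmit `data` over one of the parallel streams here ...
    return len(data)

def bulk_transfer(path: str) -> int:
    """Transfer a file as parallel chunks; returns total bytes sent."""
    size = os.path.getsize(path)
    offsets = range(0, size, CHUNK_SIZE)
    with concurrent.futures.ThreadPoolExecutor(MAX_STREAMS) as pool:
        sent = pool.map(lambda off: send_chunk(path, off, CHUNK_SIZE), offsets)
        return sum(sent)
```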
After migration, the CDF also provides continuous synchronization between the on-premises data source and the copy in the cloud, which is vital for the disaster recovery and cloud bursting use cases. This feature ensures a rapid application start, since the latest copy of the data is already in place.
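A real CDF typically tracks changes at a finer granularity than whole files, but a simplified one-way sync based on modification times conveys the idea. This is a sketch, not any vendor’s implementation:

```python
import shutil
import time
from pathlib import Path

def sync_once(src: Path, dst: Path) -> None:
    """Copy any file that is new or newer on the source side."""
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

def continuous_sync(src: Path, dst: Path, interval_s: int = 60) -> None:
    """Keep the cloud copy current by re-syncing on a fixed interval."""
    while True:
        sync_once(src, dst)
        time.sleep(interval_s)
```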
Once the application is in the cloud, IT needs to ensure acceptable performance for its users. At the same time, IT needs to make sure the application consumes cloud resources wisely and cost-effectively. It is critical that the CDF support automatic tiering of data between the different cloud storage types: the most active data should sit on the best-performing storage tier, while less active or older data remains accessible on cost-effective, high-capacity tiers.
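Tiering policies vary by product, but a common approach is to demote files that have not been accessed within a policy window. A minimal sketch, assuming a 30-day threshold and relying on file access times (which some mounts disable):

```python
import time
from pathlib import Path

COLD_AFTER_DAYS = 30  # assumed policy: demote data untouched for 30 days

def select_for_demotion(hot_tier: Path) -> list[Path]:
    """Return files whose last access is older than the policy threshold.

    Note: st_atime is unreliable on mounts using noatime; a real tiering
    engine would track access in its own metadata.
    """
    cutoff = time.time() - COLD_AFTER_DAYS * 86_400
    return [f for f in hot_tier.rglob("*")
            if f.is_file() and f.stat().st_atime < cutoff]
```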
The final step is to secure the data, both on-premises and in the cloud. Most organizations use either Active Directory or LDAP for authentication; the public cloud, by itself, does not. The lack of common authentication controls, and of a single way to manage them, means that IT administrators must learn a new authentication method and maintain two ways of managing access to data. The CDF uses Active Directory or LDAP natively, so the move to the cloud requires no special cloud authentication; it uses the same model that IT already runs in the data center.
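For illustration, verifying a user against an existing Active Directory domain can be as simple as an NTLM bind using the open-source ldap3 package; the hostname and domain below are hypothetical:

```python
from ldap3 import Server, Connection, NTLM

def authenticate(username: str, password: str) -> bool:
    """Bind against the existing AD domain controller to verify credentials."""
    server = Server("dc01.corp.example.com")      # hypothetical domain controller
    conn = Connection(server,
                      user=f"CORP\\{username}",   # hypothetical NetBIOS domain
                      password=password,
                      authentication=NTLM)
    ok = conn.bind()                              # True if credentials are valid
    conn.unbind()
    return ok
```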
Another aspect of control and security is making sure that data is unreadable if there is a breach, such as the widely reported exposures of data in misconfigured AWS S3 buckets. With a cloud rewrite, the application either needs to implement its own data encryption and key management or add a third-party solution and manage that. A CDF, however, includes built-in encryption for data both at rest and in flight.
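As a sketch of what at-rest encryption involves, the example below uses Fernet (AES-based authenticated encryption) from the widely used Python cryptography package; it is not the CDF’s actual scheme, and real key management would live in a KMS or HSM rather than in-process:

```python
from cryptography.fernet import Fernet

# Key management is the hard part; the key is generated in-process here
# purely for illustration. A real deployment would use a KMS or HSM.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record")   # what lands on cloud storage
plaintext = cipher.decrypt(ciphertext)            # only possible with the key
assert plaintext == b"customer record"
```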
Conclusion
Migrating applications to the cloud is difficult because cloud storage typically requires a new interface for the application to read and write data. The new interface means costly redevelopment and substantial investment in new ways to transfer, manage, and secure data. These costs make a CDF a compelling alternative: applications move to the cloud unchanged, while the CDF accelerates transfers, keeps data synchronized, manages data placement, and simplifies the implementation of tighter security.