Any Cloud Strategy Requires Understanding On-Premises Workloads

Is a Cloud-First Strategy Right for IT?

The cloud is a resource that most data centers are trying to factor into their future IT planning. Early adopters have found, though, that not all workloads run properly, or at least not optimally, in the cloud, from both a performance and a cost perspective. Despite the hype around a “cloud-first” strategy, the reality is that organizations need to understand their application workload IO requirements first, instead of blindly assuming the cloud will cure all woes. With knowledge of application workload characteristics and performance requirements in hand, IT is empowered to decide where to place workloads, be it the cloud or one of the many available on-premises or colocation options.

The Cloud Is Just Another Resource

Instead of considering the public cloud the be-all and end-all cure for IT woes, the data center has plenty of on-premises options that it should consider first, or at least in parallel, such as all-flash arrays, hybrid arrays, hyperconverged infrastructure, and object storage. Additionally, each of these options offers multiple methods for data access, such as block, NAS, and S3. Interestingly, cloud providers also offer a variety of storage and compute tiers, so understanding the requirements of workloads is just as critical for cloud deployments. As a result, understanding application workload profiles will enable IT to place those workloads on the right platform not only today but also in the future.
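
To make the difference between those access methods concrete, here is a minimal sketch contrasting POSIX-style file access, which block and NAS storage present, with object access over the S3 API. It uses the real boto3 library; the file path and bucket name are hypothetical examples:

```python
import boto3  # AWS SDK for Python; S3-compatible object stores accept the same API

# Block and NAS storage surface a POSIX file system: byte-addressable,
# in-place updates, familiar open/read/write semantics.
# "/mnt/nas/reports/q3.csv" is a hypothetical NAS-mounted path.
with open("/mnt/nas/reports/q3.csv", "rb") as f:
    data = f.read()

# Object storage is reached over HTTP: whole objects are PUT and GET by key,
# there is no in-place update, and latency characteristics differ sharply.
# "example-workload-data" is a hypothetical bucket name.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-workload-data", Key="reports/q3.csv", Body=data)
obj = s3.get_object(Bucket="example-workload-data", Key="reports/q3.csv")
data_back = obj["Body"].read()
```

An application written against the first model cannot transparently use the second, which is one reason workload requirements have to be understood before a platform is chosen.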

Workload Profile Creation – Real-time Analysis vs. Static Reports

Determining exactly what the workload profile is presents a challenge. Most IT professionals are forced to look at current IO consumption rates, like IOPS, and then manually test, or simply make best guesses about, future systems, be they on-premises or in the cloud, to see how much more of that type of IO the new platform can sustain.
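
As a rough illustration of the point-in-time measurement most teams start from, the sketch below samples block-device counters with the psutil library to estimate read and write IOPS. The device name is an assumption, and a one-second sample like this is far too coarse to constitute a real profile, which is precisely the problem:

```python
import time
import psutil  # cross-platform system metrics library

def sample_iops(device="sda", interval=1.0):
    """Estimate read/write IOPS for one block device over a short window.

    A single snapshot like this is exactly the kind of number that
    tempts teams into sizing a new platform by guesswork.
    """
    before = psutil.disk_io_counters(perdisk=True)[device]
    time.sleep(interval)
    after = psutil.disk_io_counters(perdisk=True)[device]
    read_iops = (after.read_count - before.read_count) / interval
    write_iops = (after.write_count - before.write_count) / interval
    return read_iops, write_iops

r, w = sample_iops()
print(f"read IOPS: {r:.0f}, write IOPS: {w:.0f}")
```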

The problem is that when testing a potential new platform, the organization seldom uses its installed production workload. IT has two options: either create a test environment that attempts to simulate production, or use a workload generator or benchmarking tool. Creating a lab environment that completely replicates the production environment is expensive, and IT is seldom able to build it out to the size of the full environment. It is also difficult to generate the amount of IO, or simulate the IO patterns, that production versions of the workloads create. Using a basic freeware IO testing tool is even worse, since the testing tool, or benchmark, typically bears minimal similarity to the actual workload.

There is an alternative, however: workload IO capture and playback. Companies like Virtual Instruments have created a software and hardware solution that captures a workload’s IO pattern over time and then plays back that IO pattern from a single appliance against any new networked storage system or storage platform candidate. These candidates include both on-premises and cloud-based options. The company also offers a Cloud Migration Readiness service that can assess the suitability of any workload for a public or private cloud deployment.
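
Virtual Instruments’ product works at an entirely different scale, but the underlying capture-and-playback idea can be sketched in a few lines: record each IO as a tuple of time offset, byte offset, size, and operation, then replay the trace with the original timing against a candidate target. This is a toy illustration of the concept, not the vendor’s implementation, and the target path is a hypothetical, pre-created test file:

```python
import os
import time
from collections import namedtuple

# One captured IO: when it happened, where, how big, read or write.
IORecord = namedtuple("IORecord", "t_offset byte_offset size op")

def replay(trace, target_path):
    """Replay a captured IO trace against a candidate storage target,
    preserving the original inter-arrival timing."""
    fd = os.open(target_path, os.O_RDWR)
    start = time.monotonic()
    try:
        for rec in trace:
            # Wait until this IO is due relative to the start of playback.
            delay = rec.t_offset - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            os.lseek(fd, rec.byte_offset, os.SEEK_SET)
            if rec.op == "read":
                os.read(fd, rec.size)
            else:
                os.write(fd, b"\0" * rec.size)
    finally:
        os.close(fd)

# A tiny hand-made trace; a real capture would contain millions of records.
# The test file is assumed to exist, pre-allocated to the trace's working set.
trace = [IORecord(0.000, 0, 4096, "read"),
         IORecord(0.002, 8192, 4096, "write"),
         IORecord(0.010, 4096, 4096, "read")]
replay(trace, "/mnt/candidate-array/testfile")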

The On-Premises Value of Workload Placement

IT professionals have a lot of options available to them as to the type of server and storage system they should use for their workloads. Most data centers already have a variety of system choices implemented in their facilities, and newer, faster systems become available every day. Knowing where to place workloads within the current infrastructure enables IT to maximize the value of each system. IT professionals can reserve all-flash array performance and capacity for the workloads that genuinely need it instead of blindly putting everything on the most expensive system. As the next generation of NVMe all-flash arrays comes to market, IT, armed with an exact understanding of its workload profiles, can determine whether its applications can take advantage of the potential performance increase.

New workloads will continue to come online. Understanding the IO profiles of current workloads, and how the IO resources of the data center’s storage systems are consumed, enables IT professionals to determine where to place each new workload. The first point of workload placement may be an all-flash array. Then, as the workload generates real-world IO, a profile is created. With the IO profile, IT can decide whether it should move the workload to an alternative storage platform that may be less expensive or offer higher performance.
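
A placement decision of this kind reduces to comparing the observed profile against what each available tier can deliver. The sketch below shows the shape of that logic; the tier names, capability numbers, and per-GB costs are all invented for illustration:

```python
# Hypothetical capabilities and costs for each available storage tier.
tiers = {
    "nvme-all-flash": {"max_iops": 500_000, "latency_ms": 0.2, "cost_per_gb": 0.90},
    "all-flash":      {"max_iops": 150_000, "latency_ms": 0.8, "cost_per_gb": 0.45},
    "hybrid":         {"max_iops": 30_000,  "latency_ms": 5.0, "cost_per_gb": 0.15},
}

def place(profile):
    """Pick the cheapest tier that satisfies the observed IO profile."""
    candidates = [
        (spec["cost_per_gb"], name)
        for name, spec in tiers.items()
        if spec["max_iops"] >= profile["peak_iops"]
        and spec["latency_ms"] <= profile["latency_target_ms"]
    ]
    return min(candidates)[1] if candidates else None

# Profile built from real-world IO after the workload's initial run on flash.
profile = {"peak_iops": 20_000, "latency_target_ms": 6.0}
print(place(profile))  # -> "hybrid": this workload never needed all-flash
```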

Workload IO profiles also change over time. When an application first comes online, usage may be light as users convert over to it. It can then become very IO intensive as it reaches its peak, and eventually most workloads are replaced or simply used less frequently. The ability to analyze workload profiles continuously enables IT to shift them to cost-appropriate or performance-appropriate systems as needed.

The reality of new workloads coming online, and the lifecycle of every workload, means that workload profiling is a capability IT needs to run as a continuous process, not an exercise it performs once a year.

The Cloud Problem

Organizations face challenges as they start to look at moving some, or even all, of their applications to the cloud. The first challenge is deciding which cloud provider to choose. While cloud providers all look the same from the outside, providers clearly differ in performance capabilities, data protection, reliability, support capabilities, and, of course, cost. The next step is to understand which tier within the chosen cloud provider to use. Each of the mega-cloud providers has at least three tiers from which to choose. These tiers have different performance capabilities, vary in cost per unit of capacity, and differ in the price to move data out of the cloud.
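
The cost side of that tier choice is straightforward to model once the workload’s stored capacity and monthly egress are known. The sketch below uses invented per-GB prices purely to show the arithmetic; real prices vary by provider and region and change frequently:

```python
# Invented monthly prices (USD per GB) for three generic storage tiers.
tiers = {
    "hot":     {"capacity": 0.023, "egress": 0.09, "retrieval": 0.00},
    "cool":    {"capacity": 0.010, "egress": 0.09, "retrieval": 0.01},
    "archive": {"capacity": 0.002, "egress": 0.09, "retrieval": 0.05},
}

def monthly_cost(tier, stored_gb, egress_gb):
    p = tiers[tier]
    return (stored_gb * p["capacity"]
            + egress_gb * (p["egress"] + p["retrieval"]))

# A 50 TB workload that reads 5 TB back out each month: the cheap
# archive tier loses much of its advantage once egress is counted.
for tier in tiers:
    print(tier, f"${monthly_cost(tier, 50_000, 5_000):,.2f}")
```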

Once workload profiling is performed and the demands of the workload are understood, IT needs to test that workload against various cloud configurations. Testing is potentially the most problematic part of a cloud strategy. IT has to make sure the workload will perform as expected under the selected cloud configuration. Fortunately, as noted previously, there are finally viable tools to help in this process.

While it is easy to allocate CPU resources to the testing task, testing IO performance is difficult. To test a workload, a copy of production data has to be migrated to the cloud, and migration is a time-consuming and expensive process. The test may also require a continual refresh of the data first copied to the cloud. Finally, once CPU and data are in the cloud, the IT professional still faces the same test execution problem: how to simulate users’ random interaction with the workload?

The Cloud Value of Workload Placement

The data center that has a clear understanding of the IO profile of all its workloads has a tremendous advantage when it comes to deciding which workloads to move to the cloud, as well as which cloud provider and which cloud storage tier the application should use. Since it knows the precise IO profile of each workload and has simulated those workloads in the cloud, it can more accurately match each workload to the appropriate cloud provider and storage tier.

Once the workload generation software is on a virtual machine in the cloud, it can play back the exact IO profile. Having a cloud-based workload simulator allows the organization to accurately calculate the optimal CPU, network, and storage resources. It also means the organization doesn’t need to transfer terabytes or petabytes of data into the cloud, saving time and money.

The ability to start a workload test in the cloud quickly allows the organization to test multiple permutations of cloud configurations efficiently, including various cloud providers and cloud storage tiers. In short, the organization can find the exact cloud combination that delivers the performance the workload requires. It can also determine ahead of time whether the cloud is the right place for each application.
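
Once playback can be launched on demand in the cloud, sweeping the configuration space becomes a simple loop. The sketch below assumes a hypothetical run_playback() helper, stubbed here with placeholder numbers, that deploys the replay appliance against one provider/tier combination and returns measured results:

```python
from itertools import product

providers = ["provider-a", "provider-b", "provider-c"]
storage_tiers = ["premium-ssd", "standard-ssd", "hdd"]

def run_playback(provider, tier):
    """Hypothetical helper: deploy the replay appliance on `provider`,
    point it at `tier`, play back the captured trace, and return results.
    Stubbed with placeholder numbers for illustration."""
    return {"p99_latency_ms": 2.0, "iops": 40_000, "monthly_cost": 1_200.0}

required = {"p99_latency_ms": 5.0, "iops": 25_000}

# Test every permutation; keep the cheapest that meets the requirements.
viable = []
for provider, tier in product(providers, storage_tiers):
    result = run_playback(provider, tier)
    if (result["p99_latency_ms"] <= required["p99_latency_ms"]
            and result["iops"] >= required["iops"]):
        viable.append((result["monthly_cost"], provider, tier))

if viable:
    cost, provider, tier = min(viable)
    print(f"best fit: {provider}/{tier} at ${cost:,.0f}/month")
else:
    print("no cloud configuration met the workload's requirements")
```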

Conclusion

Understanding application workload profiles needs to be a foundational capability for every data center. Having this knowledge allows IT to make better infrastructure deployment decisions and investments, regardless of where that investment might be, on-premises or in the cloud.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
