What is Workload Modeling?

Workload modeling is the process of capturing the IO patterns of a workload so that they can be played back later in a different environment. The goal is to closely simulate how the workload will respond to changes in the infrastructure in which it operates. Ideally, IT planners, armed with a model of their workload, can forecast when the current environment will no longer sustain additional growth in the workload's IO requirements. A model also enables IT to test new environments without having to recreate the entire workload.
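
To make the idea concrete, here is a minimal sketch in Python of what one record in a captured IO profile might contain. The field names and values are hypothetical and not drawn from any particular product.

```python
# Hypothetical shape of one record in a captured IO profile.
from dataclasses import dataclass

@dataclass
class IORecord:
    timestamp: float   # seconds since capture started
    operation: str     # "read" or "write"
    size_bytes: int    # IO transfer size
    latency_ms: float  # observed completion latency
    queue_depth: int   # outstanding IOs when this one was issued

# A workload model is essentially an ordered stream of such records,
# plus enough metadata (host, volume, LUN) to replay it elsewhere.
profile = [
    IORecord(0.000, "read", 4096, 0.42, 8),
    IORecord(0.001, "write", 8192, 0.95, 9),
]
print(profile[0])
```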

The challenge is capturing the information needed to build the workload profile. Without an accurate workload modeling capability, IT is forced either to capture workload characteristics manually and attempt to simulate them with other tools, or to forgo workload modeling and recreate as much of the workload's infrastructure as possible in a test scenario.

Workload Capture Challenges

Capturing a model of a workload manually is very difficult. The IT planner must capture a continuous stream of data from the physical server layer, the network layer, and the storage layer, and then somehow correlate that information. Even if that correlation can be made, the IT planner must find a way to replay the information on new systems, and most testing tools don't support the variability of input required to truly simulate the workload's IO pattern.
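
To illustrate the correlation problem, the sketch below merges per-layer event streams into a single timeline by timestamp; the events and layer names are invented for the example.

```python
# Merge already-sorted per-layer event streams into one timeline so a
# single IO can be followed end to end. Illustrative data only.
import heapq

server_events  = [(0.000, "server",  "io-1 issued"),
                  (0.004, "server",  "io-2 issued")]
network_events = [(0.001, "network", "io-1 enters fabric")]
storage_events = [(0.002, "storage", "io-1 completed")]

# heapq.merge assumes each input stream is sorted by timestamp.
for ts, layer, event in heapq.merge(server_events, network_events,
                                    storage_events):
    print(f"{ts:.3f}s  [{layer:<7}] {event}")
```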

Another challenge is that workloads continuously change. IT planners need to update their models constantly to make sure they are testing with the most current version of the workload's IO profile.

Recreating a test environment that approximates the workload's infrastructure is also difficult and expensive. Equipment must be purchased and dedicated to the task. Some enterprises maintain a test environment designed to test multiple workloads at different times. While this approach saves on hardware expenditures, it adds complexity and typically means the organization can't simulate more than one workload at a time.

Even if all the right hardware and software can be purchased for the test environment, IT still can't truly simulate production use of the workload, because real users don't log in and use the test environment for an extended period of time. In most cases, IT relies on a series of scripts to simulate user activity.

Creating a DVR for Workloads

What IT needs is a DVR-like capability for workload capture. Such a solution runs inline with the workload and captures information about its IO pattern in real time. The DVR can also capture IO profile information from the logs of the storage systems the workload currently uses. Because the workload profile is captured seamlessly and continuously, it is always up to date as the workload changes. The "recording" of the IO profile can later be played back by an appliance directly against new storage systems and storage infrastructures.
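
A hypothetical playback loop shows the core of the idea: reissue each recorded IO at its original offset from the start of capture. Here, replay_io is a stand-in for whatever mechanism actually drives IO against the target system.

```python
# Sketch of the "playback" half of the DVR idea. replay_io() is a
# placeholder; a real appliance would issue the IO to the target
# storage system (e.g., over a block device or fabric connection).
import time

def replay_io(record):
    print(f"replaying {record['op']} of {record['size']} bytes")

def replay(profile):
    start = time.monotonic()
    for record in profile:
        # Wait until this record's original offset from capture start.
        delay = record["ts"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        replay_io(record)

replay([{"ts": 0.0, "op": "read",  "size": 4096},
        {"ts": 0.5, "op": "write", "size": 8192}])
```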

Workload modeling is useful for more than evaluating new technologies. A workload modeling solution could expose "knobs" that allow IT to turn up certain IO intensities, so users can simulate growth and measure that growth's impact on the storage infrastructure.
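
One simple way such a knob could work, sketched below with assumed field names, is to compress the inter-arrival times in the recorded profile: the same IO mix replayed at twice the original arrival rate approximates a 2x growth in intensity.

```python
# Hypothetical growth "knob": compress inter-arrival times so the
# same IO mix arrives at a higher rate.
def scale_intensity(profile, factor):
    """Return a copy of the profile with arrival times compressed."""
    return [{**rec, "ts": rec["ts"] / factor} for rec in profile]

profile = [{"ts": 0.0, "op": "read"}, {"ts": 1.0, "op": "write"}]
print(scale_intensity(profile, 2.0))  # same IOs, half the spacing
```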

Workload Modeling and NVMe

The biggest challenge facing IT is testing new technology as it comes to market. Among the newest technologies are NVMe and NVMe over Fabrics (NVMe-oF). These technologies promise to improve performance and lower latency by replacing SCSI with a protocol that supports far more queues and much deeper queue depths. With NVMe and NVMe-oF, one of the challenges is that IT now has two sets of tests to run: how will NVMe flash arrays affect the performance of its workloads, and how will NVMe-oF networks benefit those workloads? In many cases, an organization will want to know whether an NVMe flash array, or an end-to-end NVMe fabric, will enable it to consolidate several or even all of its workloads onto a single storage system. Workload modeling allows the organization to test that exact scenario.
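
For a sense of scale, here is a back-of-envelope comparison of theoretical command concurrency. These are spec maximums; real devices expose far fewer queues.

```python
# The NVMe spec allows up to 65,535 IO queues, each up to 65,536
# commands deep, versus a single queue for legacy interfaces
# (SATA NCQ is 32 commands deep; SAS is commonly a few hundred).
nvme_outstanding = 65_535 * 65_536  # queues x queue depth
sata_outstanding = 1 * 32

print(f"NVMe theoretical outstanding commands: {nvme_outstanding:,}")
print(f"SATA NCQ outstanding commands:         {sata_outstanding:,}")
```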

In our on-demand webinar, "Does Your Data Center Need NVMe?", Virtual Instruments and SANBlaze join Storage Switzerland to discuss the challenges associated with moving to NVMe and how to make sure your workloads and your infrastructure are ready for it.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a highly sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
