When IT professionals look to solve a storage performance or capacity problem, the number of options available to them can be almost overwhelming, and selecting the right solution to their particular problem can seem nearly impossible. They simply don’t have the time, resources or money to test every available storage system, so instead they are forced to rely on vendor-supplied data to make their selections. Even if this data is accurate, it is seldom representative of their specific environment. As we discuss in our article “The Value of an Independent Storage Performance Testing Platform”, Load DynamiX is a solution that can simulate and create I/O loads specific to the data center’s demands.
Much of the recent focus of performance testing has been on all-flash arrays and specific workloads, like databases. But in the real world, IOPS consumption is the result of “composite” workloads that involve multiple servers, each with a specific I/O demand. There is an increasing need for data centers to test database and other workloads operating on newer storage technologies such as OpenStack Swift/Cinder, Amazon S3 and NFS 4.1. Making sure these storage environments meet the data center’s performance expectations is just as important as it is in a database environment. In its latest release, Load DynamiX addresses many of these realities.
Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), virtual server, virtual desktop (VDI) and other mission-critical workloads are typically composite in nature. This means they have multiple systems all generating I/O, and sometimes the I/O patterns differ significantly between these systems. Virtual server and desktop environments are examples of composite workloads because they have multiple virtual machines (VMs) creating what is known as the “I/O Blender”.
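The “I/O Blender” effect described above can be sketched in a few lines of Python. This is purely an illustrative simulation, not Load DynamiX functionality: the workload names, block sizes, read percentages and weights below are hypothetical profiles chosen to show how distinct per-system I/O patterns interleave into one seemingly random composite stream.

```python
import random

# Hypothetical per-workload I/O profiles: block size (KB), read fraction,
# and a relative IOPS weight. Values are illustrative, not measured.
PROFILES = {
    "oltp_db":  {"block_kb": 8,   "read_pct": 0.70, "weight": 5},
    "vdi_boot": {"block_kb": 4,   "read_pct": 0.90, "weight": 3},
    "backup":   {"block_kb": 256, "read_pct": 0.05, "weight": 1},
}

def blended_stream(n_ops, seed=42):
    """Interleave I/Os from several workloads into one 'I/O blender' stream."""
    rng = random.Random(seed)
    names = list(PROFILES)
    weights = [PROFILES[n]["weight"] for n in names]
    ops = []
    for _ in range(n_ops):
        name = rng.choices(names, weights=weights)[0]
        profile = PROFILES[name]
        op = "read" if rng.random() < profile["read_pct"] else "write"
        ops.append((name, op, profile["block_kb"]))
    return ops

stream = blended_stream(1000)
reads = sum(1 for _, op, _ in stream if op == "read")
print(f"{reads} reads / {1000 - reads} writes "
      f"from {len(set(name for name, _, _ in stream))} workloads")
```

Viewed from the storage array, the resulting stream mixes small random database I/O, read-heavy VDI traffic and large sequential backup writes, which is why testing each workload in isolation understates real-world demand.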
With its latest release, Load DynamiX provides the ability to create multiple workloads at the same time and execute them against one or more storage systems. This allows for easy modeling of composite workloads. A storage architect can create this test with virtually no understanding of the complex environment that the storage infrastructure will be supporting, yet still generate a very real-world result.
We are often asked by IT professionals if OpenStack is ready for their enterprise, and the answer is, increasingly, “yes”. One of the challenges, though, as these enterprises begin their OpenStack journey is establishing a storage infrastructure to support the framework. Because of its flexibility, OpenStack also makes testing more complicated. In fact, standing up the OpenStack compute layer just to run the test can be a challenge in and of itself.
Load DynamiX has always eliminated the need to build the front-end compute tier used to generate the I/O load for a test. Now they include the ability to simulate Ceph and OpenStack workloads. In this release they support Swift and Cinder, with plans to support Manila once it is finalized. They have also added an intuitive configuration model to simulate object workloads. Via the Load DynamiX Enterprise GUI, users can create a wide variety of Put/Get commands to generate a test pattern that closely resembles the intended environment.
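To make the idea of a Put/Get test pattern concrete, here is a minimal sketch of the concept, assuming a configurable PUT-to-GET ratio and fixed object size. It runs against an in-memory stand-in for a Swift/S3-style endpoint; the class and parameter names are hypothetical and this is not the Load DynamiX configuration model itself.

```python
import random

class MockObjectStore:
    """In-memory stand-in for a Swift/S3-style object endpoint (illustrative only)."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

    def get(self, key):
        return self.objects.get(key)

def run_pattern(store, n_ops, put_pct=0.3, obj_kb=64, seed=1):
    """Issue a weighted mix of PUT/GET operations, mimicking an object
    workload profile defined by a PUT fraction and object size."""
    rng = random.Random(seed)
    payload = b"x" * (obj_kb * 1024)
    keys = []
    puts = gets = 0
    for i in range(n_ops):
        # First op must be a PUT; after that, roll against the PUT fraction.
        if not keys or rng.random() < put_pct:
            key = f"obj-{i}"
            store.put(key, payload)
            keys.append(key)
            puts += 1
        else:
            store.get(rng.choice(keys))
            gets += 1
    return puts, gets

store = MockObjectStore()
puts, gets = run_pattern(store, 1000)
print(f"{puts} PUTs, {gets} GETs, {len(store.objects)} objects stored")
```

Varying the PUT fraction and object size is enough to approximate very different object workloads, from write-heavy ingest to read-heavy content serving.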
VDI and NFS Validation
Both VDI and NFS are becoming popular next steps for flash implementation and, as a result, the ability to adequately test a storage system’s support for these environments is critical. Load DynamiX adds specific workload support for VDI and NFS 4.1 in this release, providing granular control over how read and write traffic occurs, as well as the size and type of those I/Os. There are also specific settings that can be adjusted per workload. For example, the VDI workload model supports pools, clones and datastore-size simulation.
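The kind of granularity described above (a read/write ratio plus an I/O-size distribution) can be sketched as follows. This is an illustrative generator under assumed parameters, not the product’s actual settings; the default values approximate a read-heavy VDI steady state with mostly small I/Os.

```python
import random

def make_io_mix(n_ops, read_pct=0.8,
                size_dist=((4, 0.6), (8, 0.3), (64, 0.1)), seed=7):
    """Generate (op, size_kb) pairs from a configurable read/write ratio
    and a weighted I/O-size distribution (sizes in KB). Values here are
    hypothetical defaults, not vendor-measured figures."""
    rng = random.Random(seed)
    sizes = [s for s, _ in size_dist]
    weights = [w for _, w in size_dist]
    return [("read" if rng.random() < read_pct else "write",
             rng.choices(sizes, weights=weights)[0])
            for _ in range(n_ops)]

mix = make_io_mix(10000)
reads = sum(1 for op, _ in mix if op == "read")
print(f"read fraction: {reads / len(mix):.2f}")
```

Swapping in a different ratio and size distribution, say 50% reads with larger blocks, would instead approximate an NFS file-serving profile, which is exactly the kind of knob-turning per-workload settings make possible.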
When a performance problem appears there is often a scramble to find a resolution, which can lead organizations to simply throw hardware at the problem based on a best-guess analysis. Load DynamiX allows an organization to prevent performance issues. With their appliance in place, organizations can predict just how far their current storage infrastructure will take them, as well as adequately test new systems with accurate simulations of the organization’s exact workloads.