Storage professionals need a way to test and validate that new or proposed storage architectures will perform well in their application environments. Common storage testing “solutions” like ioMeter and Vdbench, however, aren’t going to cut it. While these tools enable storage planners to simulate various I/O workloads, they are extremely limited in their ability to simulate a real world environment and can lead you to make highly flawed purchase and deployment decisions.
Free but Useless?
While ioMeter and Vdbench are free, their limitations are significant enough that they offer little value when testing for a real-world data center. Most freeware tools cannot test metadata performance, for example, and they have limited ability to exercise increasingly essential storage features like data deduplication and compression. These tools are also difficult to set up and maintain, as they often require complex scripting to generate loads at larger scale. But the fundamental issue is that storage engineers have no real way to accurately model production workloads with these tools, because the tools cannot map the I/O patterns of the actual production applications running in a specific data center. These limitations can lead IT planners to invest in storage systems on the strength of a “false positive” test, resulting in performance problems that can damage their reputation and cost them customers and revenue. Or worse, they may give up and be forced to rely on vendor benchmarks, which are often generated with the very same flawed testing utilities.
Although many vendors still showcase their ioMeter results in marketing and sales materials, buyers should be wary of vendor-provided benchmarks. First, these tests are typically performed under tightly controlled lab conditions designed to produce best-case results. Second, vendor benchmarks cannot emulate the unique workloads of your production environment. The second point is critical because it leaves storage planners in a quandary: either take the vendor benchmarks at face value and risk poor performance, or overspend by provisioning additional, unnecessary storage resources as a “just in case” safety measure.
One of the major challenges with storage testing is that it is time intensive. Storage engineers must first create customized scripts to fully utilize tools like ioMeter, ioZone, and Vdbench in an attempt, however inadequate, to emulate production workloads. These scripts must be updated frequently to keep pace with changes in the environment, such as operating system upgrades and new application version rollouts. But perhaps the biggest challenge with ioMeter and Vdbench is that, as freeware, they are only loosely supported by the open source developer community. New testing features are rarely released, and it can be very difficult to get answers to testing questions or fixes for bugs. In short, you are more or less on your own when using these tools for performance testing.
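To illustrate the scripting burden, even a minimal Vdbench parameter file requires hand-written storage, workload, and run definitions. The sketch below shows the basic shape; the device path and workload mix are hypothetical:

```
* Storage definition: the raw device under test (hypothetical path)
sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=16

* Workload definition: 4 KB transfers, 70% reads, 100% random
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100

* Run definition: drive the workload at maximum rate for 60 seconds
rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=5
```

Approximating a real application typically means layering many such definitions and revisiting them after every environment change, which is exactly the maintenance burden described above.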
Controlled Benchmark Testing
The reality is that for financial and operational planning purposes, IT organizations need a way to predict how well a storage system will perform as workloads increase over time. And that testing should be done with something that simulates the specific workloads and workload mix of that particular data center, rather than with a generic set of tools that is totally disconnected from the environment.
Testing with models that accurately simulate production workloads, for example, can provide IT planners with the insights necessary to predict when additional resources will be needed to prevent a production storage system from hitting a performance wall. Vendor benchmarks simply don’t provide this level of insight and can’t find the true performance limits of the storage systems. Consequently, IT organizations need to perform independent storage testing to determine what their actual storage needs are to help mitigate the risk of service disruptions and to avoid needlessly overspending on storage purchases.
IT planners should look for testing solutions that can simulate their exact workload conditions and I/O profiles. Doing so delivers four key benefits that shift testing from an ad hoc, on-the-fly task with little real value to a proactive practice that can save the organization money, allow for more accurate storage budgeting, and help guarantee a positive end-user experience.
#1 – Testing At Scale
Even in a best-case scenario where an organization has the lab resources and personnel to conduct extensive storage testing, it is very difficult to test at a scale that accurately reflects production-level workloads. Anything short of a complete replica of the production environment's transaction load will fail to produce truly meaningful test results. And of course, scaling a lab to that degree is not a financially viable option for most organizations.
Despite these limitations, organizations simply can't abandon storage testing altogether. Instead, they need ways to automate and simplify the storage testing process and, more importantly, to simulate their production workloads accurately: in other words, a solution that can generate highly accurate workload profiles. Organizations need to model production workloads and run them, in a consistent and repeatable way, against various storage systems to determine what their actual storage needs are. This would enable businesses to spend only what they need to spend, and it would greatly increase confidence that a particular offering will meet the storage I/O performance requirements of critical applications.
#2 – Storage Decision Support
It is also important for the testing platform to be able to iteratively scale workloads to test the I/O limits of a given storage subsystem. Granular workload modeling provides IT planners with the insights they need to make better storage purchasing and deployment decisions. For instance, they could test whether their applications would gain a performance benefit from an all-flash storage array or whether a hybrid system would suffice. This could help organizations save significantly on storage acquisition costs while also helping to ensure that more expensive storage resources, like flash capacity, are efficiently configured on new or existing arrays.
In addition, the ability to iteratively stress test a storage system can help storage planners estimate when a certain storage configuration will need to be upgraded with additional resources. This would make the storage refresh or upgrade process more predictable and could help mitigate the risk of application service disruptions resulting from a sudden shortfall in available storage I/O performance.
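The iterative stress-testing idea can be sketched in a few lines: step up the offered load until measured latency breaks the service-level target, and report the last sustainable level. This is an illustrative sketch only; the function names and the latency curve are hypothetical stand-ins for real measurements against an array:

```python
def find_saturation_point(measure_latency_ms, start_iops=1000,
                          step_iops=1000, latency_sla_ms=5.0,
                          max_iops=1_000_000):
    """Step up offered load until latency exceeds the SLA; return the
    highest load level that still met it (0 if none did)."""
    best = 0
    load = start_iops
    while load <= max_iops:
        if measure_latency_ms(load) > latency_sla_ms:
            break
        best = load
        load += step_iops
    return best


def fake_array_latency(iops):
    # Hypothetical latency curve: ~1 ms at low load, rising sharply
    # as the (imaginary) array saturates around 50,000 IOPS.
    return 1.0 + (iops / 50_000) ** 8


print(find_saturation_point(fake_array_latency))  # → 59000
```

In practice, `measure_latency_ms` would drive a real load generator against the array under test rather than a synthetic curve; the stepping logic is what makes the upgrade point predictable rather than a surprise.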
#3 – Testing Instead of Planning
Bringing increased automation to storage testing could also reduce the need for all of the meticulous planning and preparation work that storage engineers have to perform to support testing activities. In some cases, engineers spend upwards of 80% of their time planning and preparing for storage tests and only 20% actually conducting them. This imbalance severely limits how frequently tests can be performed and, as discussed above, the test results may not even be valid since the modeling lacks realism and production scale.
#4 – Automated Testing Support
Automated storage performance testing obviates the need for engineers to generate new scripts or update existing ones to perform storage tests. As a result, more of their time can be spent conducting tests instead of preparing for them. This can help facilitate more frequent tests to ensure that any change to the storage infrastructure, whether a new array is introduced, new storage software features are added or a microcode upgrade is applied to a networking device, will not disrupt production application performance. Furthermore, by building automated testing into the change control process, businesses can reduce risk and help ensure that application service levels will be maintained regardless of any impending change or changes to the environment. This can boost IT’s level of confidence in understanding how certain architectural changes will impact the environment and as a result, lead to better overall decision making.
Freeware tools like ioMeter and Vdbench should be sunsetted, or at least relegated to hobbyist use. These tools once served a valuable purpose, but with the proliferation of virtualized application infrastructure, flash-based architectures, tiering, storage virtualization, and new storage architectures based on OpenStack and Ceph, a more advanced and comprehensive storage validation and testing paradigm is needed. The ideal is a single, integrated performance validation and testing solution that supports all storage technologies, protocols, and architectures.
Professionally supported automated performance testing solutions like Load DynamiX are making it possible for storage planners to accurately assess and specify the right storage solution at the right time for their business. Such products offer tremendous scalability, support the latest storage technologies like dedupe and compression and offer advanced reporting and analysis tools to simplify the storage engineering process. This could lead to improved storage ROI, reduced TCO, assured performance in critical application environments and reduced business risk.
This Article is Sponsored by Load DynamiX