
In its most recent press release, VergeIO reported performance results of more than 1 million IOPS based on 64K block-size tests, breaking away from the more common but less realistic 4K block-size tests. This shift better represents what modern workloads demand and produces results that organizations can rely on.
While 4K block sizes have long been the industry standard for storage benchmarks, they do not accurately represent the requirements of modern workloads. IOPS comparisons are not perfect, and they should not stand alone, but IOPS testing with 64K block sizes offers a far better approximation of real-world demands.
This blog explores why block size matters, the importance of realistic testing conditions, and the necessity of detailed benchmarks to help organizations make informed decisions about their storage solutions.
Why 64K Block Size Is More Relevant
The 4K block size is a legacy metric that was originally designed for low-level hardware performance analysis. However, enterprise workloads—such as virtual machines, databases, and large-file applications—operate at much larger block sizes. By testing at 64K blocks, vendors can provide results that align more closely with how storage systems are used in production environments.
Testing with 4K block sizes generates impressive-looking performance numbers, but those figures rarely translate to real-world scenarios. Instead, they create a false sense of confidence that can lead to unexpected performance shortfalls once the storage system is deployed in production.
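To see why the block size behind an IOPS figure matters, a quick back-of-envelope calculation helps: throughput is simply IOPS multiplied by block size. The short Python sketch below uses an illustrative figure of 1,000,000 IOPS (not a reproduction of any specific published result) to show how much more data the same IOPS number represents at 64K than at 4K.

```python
# Back-of-envelope: how much data does a given IOPS figure actually move?
# The IOPS value below is illustrative, not any vendor's published result.

def throughput_gbps(iops: int, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a given block size into GB/s of data moved."""
    return iops * block_size_bytes / 1e9

IOPS = 1_000_000

for label, block_bytes in (("4K", 4 * 1024), ("64K", 64 * 1024)):
    print(f"{IOPS:,} IOPS at {label}: {throughput_gbps(IOPS, block_bytes):.1f} GB/s")

# Prints roughly 4.1 GB/s at 4K and 65.5 GB/s at 64K.
```

The same headline number represents sixteen times more data moved at 64K, which is why the block size behind any published IOPS figure always needs to be stated.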
Real-World Applications Favor Larger Blocks
- Virtualized Environments: Hypervisors and virtual machines often read and write in 64K blocks or larger. These large block sizes are a concern for HCI environments because of the amount of network traffic required to read and write data, making it critical that these solutions leverage optimized network protocols and avoid the inefficiencies of NFS and iSCSI.
- Databases: Modern database platforms, like Microsoft SQL Server and Oracle, use larger block sizes to improve throughput. Bigger blocks leave more room for keys in the branch nodes of B*-tree indexes, which reduces index height and improves the performance of indexed queries (see the sketch after this list). This is especially true when running databases as virtual machines within a hypervisor, where storage efficiency and query performance are critical to workload responsiveness.
- Large-File Workloads: Media rendering, video editing, and backup software rely on larger block sizes for optimal performance. IT professionals rule out virtualizing large-file workloads because of the high storage bandwidth and compute requirements, plus the added complexity of GPU virtualization. Optimized and efficient storage services can make virtualization for these workloads more feasible by improving throughput and reducing latency.
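The database point above is easy to quantify. A B*-tree's branch-node fanout scales with block size, and index height grows with the logarithm of the row count divided by the logarithm of the fanout. The Python sketch below assumes a 64-byte branch entry and an illustrative row count; real numbers depend on key width, page overhead, and fill factor.

```python
import math

# Rough estimate of B-tree index height versus block size.
# The 64-byte entry size and the row count are assumptions for illustration;
# real values depend on key width, page overhead, and fill factor.

def index_height(rows: int, block_bytes: int, entry_bytes: int = 64) -> int:
    fanout = block_bytes // entry_bytes        # branch entries per node
    return math.ceil(math.log(rows, fanout))   # tree levels needed to cover all rows

rows = 500_000_000  # half a billion rows, for illustration

for label, block_bytes in (("8K", 8 * 1024), ("64K", 64 * 1024)):
    print(f"{label} blocks -> ~{index_height(rows, block_bytes)} index levels")

# Larger blocks mean wider branch nodes, a shorter tree,
# and fewer block reads per indexed lookup.
```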
Benchmarks using 64K blocks not only provide insight into raw performance but also demonstrate how a storage system will handle workloads that reflect actual business needs.
Testing with Realistic Configurations
To ensure benchmarks are meaningful, they must use configurations that match what customers deploy in production. Unfortunately, many vendors still rely on unrealistic test environments that inflate results but provide little practical value to IT professionals.
Avoid Unrealistic Test Servers
Some vendors run their tests on the latest high-performance servers specifically to inflate results, using hardware that is either too expensive or simply unavailable in today’s constrained supply chains. Real-world benchmarks should use mainstream server configurations, reflecting the systems that most organizations can afford and deploy. For example, a balanced server configuration with consumer-grade processors, standard RAM capacities, and commonly available NVMe drives offers a much clearer picture of a storage solution’s capabilities.
Realistic Networking Matters
Networking is another critical factor. Most organizations today rely on either 25Gb or 100Gb Ethernet connections for storage networking. Tests should explicitly state which network speeds were used and why. For example, a test conducted with 100Gb Ethernet should explain how the added bandwidth contributes to the overall performance and why it’s relevant for certain use cases.
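A quick calculation makes this concrete. Ignoring protocol overhead (which lowers the real ceiling), the Python sketch below estimates how many 64K operations per second a single 25Gb or 100Gb link can carry, showing how quickly the network, rather than the storage media, can become the bottleneck.

```python
# Roughly how many 64K operations per second can one Ethernet link carry?
# Ignores protocol overhead and replication traffic, so real ceilings are lower.

BLOCK_BYTES = 64 * 1024

def max_iops_per_link(link_gbits: float, block_bytes: int = BLOCK_BYTES) -> int:
    link_bytes_per_sec = link_gbits * 1e9 / 8   # line rate in bytes per second
    return int(link_bytes_per_sec / block_bytes)

for speed in (25, 100):
    print(f"{speed}GbE: ~{max_iops_per_link(speed):,} x 64K IOPS per link")

# Prints roughly 47,000 IOPS for a 25GbE link and 190,000 for a 100GbE link.
```

Sustaining seven-figure 64K IOPS across a cluster therefore means aggregating many such links, which is exactly why the network speeds behind a test need to be disclosed.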
Activate Data Protection and Efficiency Features
Any meaningful storage performance test should enable all enterprise-grade features that customers use in production. Benchmarks that disable these features may achieve higher raw numbers, but they fail to show how the system will perform in the real world.
- Data Protection: Snapshots and drive redundancy features (e.g., synchronous mirroring, RAID, or erasure coding) must remain active during testing. These features are critical for ensuring data integrity in production and often introduce processing overhead.
- Data Efficiency: Deduplication features should also be enabled. These technologies are essential for maximizing storage capacity and reducing costs, and their impact on performance should be reflected in test results.
In short, any capability that will be enabled under normal production conditions should also be enabled during benchmarking. Benchmarks that omit these features mislead customers about the system’s real-world performance capabilities.
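The cost of leaving these features on can also be approximated up front. The Python sketch below models write amplification for a few common redundancy schemes; the schemes and factors are illustrative, and actual overhead depends on the implementation being tested, but the exercise shows why numbers gathered with protection disabled overstate what a production deployment will see.

```python
# Illustrative write amplification for common redundancy schemes.
# Schemes and factors are examples only; actual overhead depends on the
# specific implementation and workload under test.

def effective_write_iops(raw_backend_iops: int, writes_per_logical_write: float) -> int:
    """Client-visible write IOPS when each logical write costs extra backend writes."""
    return int(raw_backend_iops / writes_per_logical_write)

raw = 1_000_000  # example backend write capability

schemes = {
    "no redundancy (unrealistic test)": 1.0,
    "2-way synchronous mirroring":      2.0,   # every write lands on two devices
    "4+2 erasure coding":               1.5,   # 6 fragments written per 4 data fragments
}

for name, factor in schemes.items():
    print(f"{name}: ~{effective_write_iops(raw, factor):,} client write IOPS")
```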
The Value of Transparent Benchmarks
Many vendors have recently stopped publishing performance benchmarks, arguing that they don’t reflect real-world usage. While it’s true that raw benchmark numbers alone don’t tell the whole story, the absence of benchmarks leaves users without a reliable point of comparison.
Why Benchmarks Still Matter
- A Starting Point: Benchmarks provide a baseline to compare storage systems. Without them, IT professionals have no quantitative way to evaluate competing solutions.
- Transparency Matters: Vendors that include detailed explanations of their testing methodology (e.g., block sizes, configurations, and enabled features) empower customers to make informed decisions.
- Accountability: By publishing benchmark results, vendors hold themselves accountable to deliver on their performance claims.
Benchmarks remain valid as long as vendors provide the necessary details to contextualize their results. Users can then evaluate those results against their own specific requirements.
Conclusion
Choosing the best block size for storage performance testing—and ensuring that tests reflect real-world scenarios—is essential for making informed infrastructure decisions. Testing with 64K block sizes better aligns with the demands of modern workloads, providing results that IT professionals can trust. Additionally, benchmarks should use mainstream server configurations, activate data protection and efficiency features, and operate over practical networking environments such as 25Gb or 100Gb Ethernet.
While some vendors have moved away from publishing benchmarks, VergeIO believes in the importance of transparency and accountability. As discussed in VergeIO’s blog “A High Performance vSAN”, our testing not only uses realistic configurations but also provides the details necessary for users to understand how our solutions perform in real-world conditions.
By focusing on realistic testing and sharing the methodology behind our benchmarks, VergeOS empowers organizations to confidently evaluate and deploy an ultraconverged infrastructure (UCI) solution that meets their performance and budget requirements.

