For years, the storage industry has hailed the advent of the all-flash data center. With growing pressure from lines of business to obtain more sophisticated and real-time analytics for competitive advantage, with declining price points and increasing density of solid-state drives, and with vendor investments in developing all-flash arrays and value-add storage software and services, there is certainly some truth behind the buzz. All-flash arrays have become more attainable for a wider range of customers and simultaneously more appropriate for a broader range of mission-critical, production workloads.
However, an all-flash approach is not yet appropriate for all workloads and customer environments. The price-performance tradeoff that drives storage architecture decisions is especially acute when weighing an all-flash array purchase. These decisions are far from clear cut, with every vendor touting superior performance. When evaluating an all-flash array, buyers should consider the following:
- More IOPS isn’t necessarily better. The performance capabilities of the array should match the performance needs of the environment.
- Storage has historically been a workload performance bottleneck in the data center, making the promise of millions of IOPS from an all-flash approach appealing at the outset. In reality, though, the vast majority of data centers do not need more than 300,000 IOPS at peak – and anything above that is likely to go unused. In fact, the most popular Storage Switzerland article for more than three years has remained the piece titled “What are IOPS and should you care?”. Additionally, variables such as network and server performance substantially affect the IOPS a workload actually sees. A back-of-the-envelope sizing sketch follows this list.
- Scale-out architectures are popular for production flash environments, but scale-up architectures are potentially more effective.
- Scale-out architectures add undesirable network latency, as well as complexity around clustering, network management, and metadata management.
- The increasing density of solid-state drives makes scale-up approaches logical. Denser drives make it easier to mix drive types within a single array, and make it harder to justify the two- or three-node minimum common in most scale-out approaches.
- Compression, deduplication, snapshots, and thin provisioning are table-stakes features, but not all implementations are created equal. Deduplication in particular should be closely scrutinized.
- Because most deduplication features are based on open-source code, they may not be optimized for the environment in which they will run.
- Always-on deduplication doesn’t always make sense; it is important to weigh the value of running deduplication against the latency it adds. For example, encryption must often occur at the application level for compliance reasons – which negates the viability of deduplication at the array level, because encrypted blocks no longer contain duplicates the array can detect (a short demonstration follows this list).
- There are a number of “next-generation” capabilities that should also be considered for enhanced value. Most notably:
- REST APIs, to enhance automation and orchestration – and thus cloud-like IT service delivery.
- Analysis of telemetry data to better understand the system’s health – thus maximizing performance, longevity and uptime (a simple polling sketch follows this list).
- Built-in analytics for enhanced business decision-making.
- Finally (but far from least important), many warranty programs are expensive and outdated – and many systems force upgrades due to limited scalability. Look for transparent pricing and for warranty plans that will last ten years.
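To put the IOPS point in context, a back-of-the-envelope estimate of aggregate peak demand is often more revealing than a spec-sheet comparison. The sketch below is a minimal illustration; the workload mix and per-instance I/O rates are hypothetical numbers chosen for the example, not measurements from any real environment.

```python
# Back-of-the-envelope IOPS sizing: estimate aggregate peak demand
# from per-workload figures, then compare against an array's rated IOPS.
# All workload numbers below are hypothetical, for illustration only.

workloads = {
    # name: (instance_count, avg_iops_per_instance, peak_multiplier)
    "virtual desktops": (500, 25, 3.0),   # bursty during login storms
    "OLTP database":    (4, 8000, 1.5),
    "file services":    (10, 300, 2.0),
}

def peak_iops(instances, avg_iops, burst):
    """Peak demand for one workload class."""
    return instances * avg_iops * burst

total_peak = sum(peak_iops(*w) for w in workloads.values())
print(f"Estimated aggregate peak demand: {total_peak:,.0f} IOPS")

# Compare against vendor headline numbers. An array rated at 1,000,000
# IOPS offers no practical benefit over one rated at 300,000 if the
# environment peaks well below either figure.
for rated in (300_000, 1_000_000):
    print(f"Array rated {rated:,} IOPS -> {rated / total_peak:.1f}x headroom")
```

With these example figures the environment peaks at roughly 92,000 IOPS, so even the 300,000 IOPS array carries more than 3x headroom – the extra capability of a million-IOPS system would simply go unused.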
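The deduplication-versus-encryption point can also be demonstrated concretely. Array-side deduplication typically fingerprints each block (for example, with a cryptographic hash) and stores duplicates only once; encrypting at the application layer randomizes block contents before the array ever sees them, so no duplicates remain to find. The sketch below is a simplified model of hash-based dedup, not any vendor’s implementation, and uses the third-party `cryptography` package for AES-GCM.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def dedupe_ratio(blocks):
    """Simplified model of hash-based dedup: total blocks / unique fingerprints."""
    fingerprints = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(fingerprints)

# Ten identical 4 KiB blocks: an ideal deduplication candidate.
plaintext_blocks = [b"\x00" * 4096 for _ in range(10)]
print(f"Plaintext dedup ratio: {dedupe_ratio(plaintext_blocks):.0f}:1")   # 10:1

# Encrypt each block at the "application layer" before it reaches the array.
# The unique nonce required per block by AES-GCM makes every ciphertext
# distinct, so the array's fingerprinting finds no duplicates at all.
key = AESGCM.generate_key(bit_length=256)
cipher = AESGCM(key)
encrypted_blocks = [cipher.encrypt(os.urandom(12), b, None) for b in plaintext_blocks]
print(f"Encrypted dedup ratio: {dedupe_ratio(encrypted_blocks):.0f}:1")   # 1:1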
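Finally, to illustrate what REST-driven telemetry monitoring might look like in practice, the sketch below polls an array health endpoint and flags conditions worth an alert. The URL, endpoint path, authentication scheme, and response fields are all invented for illustration – real arrays expose vendor-specific APIs, so consult the vendor’s documentation.

```python
import requests  # pip install requests

# Hypothetical endpoint and fields; real arrays expose vendor-specific APIs.
ARRAY_API = "https://array.example.com/api/v1"
TOKEN = "REPLACE_WITH_API_TOKEN"

def check_health(session):
    """Pull telemetry from the array and flag conditions worth an alert."""
    resp = session.get(f"{ARRAY_API}/health", timeout=10)
    resp.raise_for_status()
    stats = resp.json()
    alerts = []
    if stats.get("read_latency_ms", 0) > 1.0:
        alerts.append(f"read latency at {stats['read_latency_ms']} ms")
    if stats.get("capacity_used_pct", 0) > 80:
        alerts.append(f"capacity at {stats['capacity_used_pct']}%")
    for drive in stats.get("drives", []):
        if drive.get("wear_pct", 0) > 90:
            alerts.append(f"drive {drive['id']} near end of rated endurance")
    return alerts

session = requests.Session()
session.headers["Authorization"] = f"Bearer {TOKEN}"
for alert in check_health(session):
    print("ALERT:", alert)
```

A script like this can feed an existing monitoring or orchestration pipeline, which is the practical payoff of the REST API and telemetry capabilities noted above.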
StorageSwiss Take
The digital transformation requirements of modern businesses, coupled with ongoing technological developments in areas such as SSD density, increasingly tip the value scale in favor of an all-flash data center. However, an all-flash data center is not always appropriate, and vendor marketing hype around areas like IOPS makes it challenging to conduct a true cost/benefit analysis of solutions. As a first step, it is important to understand true workload performance requirements, and to consider the full data center architecture when assessing IT’s ability to deliver on that performance. Data reduction software can augment value, but only when utilized appropriately; consider a solution that enables capabilities to be turned off when appropriate to minimize latency. Management platforms, maintenance plans and system scalability also go a long way toward maximizing both return on investment and availability.
For more discussion and information, watch Storage Switzerland’s webinar in collaboration with X-IO Storage, “Five Things to Look for in Your Next All-Flash Array” on demand here.