All-Flash Array Hardware AND Software Matters

HDS Flash Update Briefing Note

IT professionals typically purchase all-flash arrays (AFAs) to solve storage performance problems in their database, virtual desktop, virtual server, and HPC environments. These arrays are the performance sledgehammer that makes I/O concerns a thing of the past; performance is the first, second, and third priority. After implementing an AFA, however, organizations often want to do more with the purchase than meet the original objective. As the use case expands to more workloads, the priorities shift, and performance shares the priority list with enterprise features and cost efficiency. The problem is that most AFAs force IT planners to choose between feature-less high-performance systems and compromised feature-full systems. HDS’ Hitachi Flash Storage (HFS) array promises to give organizations the best of both worlds.

AFA Hardware Matters

Many AFA vendors focus solely on software, running it on off-the-shelf hardware. While flash performance can hide poor hardware design, excellent hardware design allows flash to reach its full potential. That potential is about more than performance: a properly designed AFA should also reduce data center footprint requirements versus its HDD counterparts and be more power efficient.

AFA Software Matters

At the other end of the spectrum are AFA hardware vendors. They ignore the importance of software as the high-performance system moves from niche problem solver to primary production storage. Features like in-line deduplication, in-line data compression, thin provisioning, and snapshots are table stakes for today’s AFAs. Often missing are Quality of Service (QoS) controls, remote replication, and data encryption. QoS is critical to maintaining high performance for critical applications as the number of workloads increases. Remote replication is obviously significant from a disaster recovery perspective but is curiously absent from many AFA offerings. Encryption may be the most critical, since AFAs don’t always erase cells completely.

HDS Hitachi Flash Storage (HFS)

Hitachi’s new HFS is a 2U system, available in three configurations that support up to 60 1.6TB MLC SSDs for a total of 96TB of raw capacity. That much capacity in so small a space makes the HFS one of the densest systems on the market. HDS also claims the system can deliver read performance ranging from 700K to 1 million IOPS. The system is clearly optimized to allow flash to reach its performance, density, and power-efficiency potential.

Again, most IT professionals, once their flash array has solved the primary performance problem, want to extend its usefulness to other workloads. In this use case the system needs enterprise features in both hardware and software. From a hardware perspective, enterprises expect availability. The HFS has active-active controllers: it leverages all the storage compute power available when everything is operational, but either controller can stand in for the other in the event of a failure.

From a software perspective, the HFS includes not only the table-stakes features of in-line deduplication, in-line compression, thin provisioning, and snapshots, but also the advanced features of QoS, remote replication, and data encryption. While unnoticeable to most applications, these features can impact the overall performance potential of the array. The HFS can turn them on and off per application, so applications that do not use them are not penalized.

StorageSwiss Take

Hitachi has a complete all-flash product portfolio. Unlike other vendors, Hitachi designed and built these systems rather than acquiring them through buyouts of other companies. The result is that they integrate more seamlessly into the overall HDS strategy. Organizations can start with an HFS and then add HDS’ virtualization engine (VSP) to unify management. Alternatively, a data center could refresh directly to an all-flash configuration; the system is feature-rich and scalable enough to support the entire data center. Whether the AFA is a point solution or part of an all-flash data center strategy, the HDS HFS deserves strong consideration.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

One comment on “All-Flash Array Hardware AND Software Matters”
  1. -t says:

    Array based replication to an equally expensive AFA in a disaster recovery site has always seemed cost prohibitive to me. There are so many SDS replication options out there to eliminate that requirement.
