Briefing Note: Solving the Inflexibility of All-Flash Arrays

Coho Data Announces All-Flash Nodes

All-Flash Arrays (AFAs) are a performance sledgehammer for IT planners looking to address performance issues in their environments. These systems allow data centers to respond faster to the needs of the business and to reduce service delivery costs by creating more scalable, dense environments. However, all-flash has some drawbacks that make long-term use of the technology more of a challenge. Coho Data’s new All-Flash 2000f promises to address these challenges so the data center can settle on a single storage platform for years to come.

Scale-Up vs. Scale-Out…Again

When it comes to storage architectures, the debate between scale-up and scale-out solutions has raged for quite some time. The truth is that the architecture that best suits the needs of your organization will depend on its expectations of IT as well as the organization’s growth plans. In general, if the organization expects to standardize on a single storage system and have that system expand to meet the long-term performance and capacity needs of the business, then scale-out storage systems deserve strong consideration. But that scale-out storage system also has to be cost effective and meet the initial, short-term needs of the business. This is an area where Coho’s new All-Flash 2000f excels: it can start small, and it overcomes many of the long-term challenges of other scale-out storage systems.

Double All-Flash Array

The 2000f is also one of the few AFAs to support two tiers of flash. It uses PCIe flash as a high-endurance, high-performance shock absorber in front of the drive form factor flash. This allows Coho to use high-capacity solid state drives (SSDs) with a lower re-write tolerance and a lower cost. The result is a very competitive AFA price point without risk to data or performance.

Software Defined Networking Eliminates Controller Bottlenecks

One of the challenges that both scale-out and scale-up storage systems struggle with is controller bottlenecks. An AFA fixes the backend storage performance problem by eliminating hard disk drives. It also eliminates the potential for inconsistent performance due to cache misses, since there is just one high-performance tier. But some scale-out and scale-up storage systems introduce a new bottleneck: the storage controller. The controller becomes an issue, even in some scale-out designs, because all data has to route through a single node before being dispersed to the other nodes, and this chokepoint can no longer hide behind slow-performing hard drives. The SSDs are ready and waiting for data. As we discussed in our article “Software Defined Networking For Better Scale-out Storage,” Coho leverages software defined networking to eliminate this bottleneck.

Mixed Drive and Node Sizes

Another challenge facing scale-out AFAs is the inability to mix node types and the inability to mix drive types within those nodes. To make matters worse, some scale-out designs require that you purchase each node at 100% capacity. Coho addresses all three of these problems. First, each node can start with as few as eight drives, and more drives can be added later. When the time comes to add those drives, they can be different from the original set. For example, the initial configuration could use 1TB SSDs, and later, when additional capacity is needed, 2TB or 4TB drives can be added. Finally, the nodes themselves can be mixed, so as Coho introduces new generations of nodes, existing customers can use the latest technology. They are not forced to buy an old node.

Add HDD

Eventually most data centers will want to add hard disk to their all-flash configuration. As great as flash is, in the typical data center there is simply too much data that does not belong on flash storage. Coho allows the intermixing of hard disk-based nodes with all-flash nodes, all managed from a single namespace. This is especially important for data centers looking to support a single storage system and have it be the primary system for a long time.

StorageSwiss Take

Scale-up storage promises data centers the ability to settle on a single storage system for a long time. The problem is that most of these systems start too large and provide limited flexibility as they age. Coho, with its SDN-driven scale-out design, has a solution that can start small, expand in a flexible manner, and not be impacted by a controller bottleneck. The new 2000f may be the answer for data centers looking to simplify storage and eliminate performance problems for the long term.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
