Three new considerations for Scale-Out and Scale-Up All-Flash Architectures

A couple of years ago, Storage Switzerland wrote an article, “Scale Out or Scale Up? – 6 Key Considerations for the Flash Array Buyer”. The points made in that article are still relevant, but vendors in both architecture camps have made progress resolving the issues it raised. In 2015, the conversation needs to move beyond architecture alone to a discussion of how these approaches maintain a consistent level of performance through failures and upgrades.

Revisiting the Five Keys

In our prior article the five key considerations were:

  1. How Small Can the System Start?
  2. Can the System Scale Performance and Capacity Independently?
  3. Is Scale-up Performance Really a Concern?
  4. Is Linear Performance a Reality?
  5. Is Scale-out Really Less Expensive?

While these are still important questions to ask a prospective all-flash vendor, many will have an answer ready. As a result, deeper questions are needed to get at the details behind those answers. For example, a few scale-out flash vendors can now start in a single-node configuration, so the questions should shift to how the transition from a single node to multiple nodes works and whether data loss or a long migration is involved.

The Flash Impact

Over the past two years we’ve seen a very consistent trend when it comes to flash adoption in a data center: once it starts, it is almost impossible to stop. This is partly because of the obvious application performance increase when flash is implemented; users and application owners get ‘hooked’ on the instantaneous response. There is another very common flash dividend as well: a massive reduction in storage management time. The time that the pre-flash data center spends tuning, and sometimes coaxing, performance out of a hard disk or hybrid array is almost completely eliminated.

As the initial ‘shock and awe’ of the all-flash performance boost subsides, users and application owners quickly become accustomed to its attributes. The data center changes: virtual infrastructures (server and desktop) become denser, and applications are expected to scale to support more users and data. Flash deployment extends to areas never considered for flash, like analytics, NoSQL and even file sharing, and the considerations and expectations placed on flash become more critical.

The New Considerations for All-Flash

1 – Provide Consistent Performance

As the data center begins to count on all-flash storage systems, flash performance moves from being something to be thankful for to something critical to sustaining the environment. For example, a virtual server architecture that has maximized virtual machine density can’t afford a sudden 50% drop in performance because of a component failure or storage system maintenance. Having grown accustomed to sub-millisecond latency, flash adopters expect consistent performance regardless of the problems facing the storage team.

The challenge is to ensure that same consistent performance even when AFA resources are down. One way to accomplish this is for the storage team to manually cap utilization of the system’s performance resources at 50%; doing so preserves headroom to ensure service delivery during such events, but few organizations actually do this. Instead, the organization’s users and its customers suffer through poor performance when a resource becomes unavailable. Ideally, the storage system should automatically reserve performance resources so that performance stays consistent through an outage.

We’ve seen some modern scale-up architectures address this need with a hybrid active-active/active-passive controller design: one that allows active-active front-end access for I/O networking and system memory coherency, with back-end active-passive CPU utilization to ensure performance reserves for failures and maintenance.

Some vendors of active-active architectures claim that they would rather provide full access to all of the array hardware, but when a controller fails or needs to be upgraded, they lose 50% or more of their potential performance. When a system is purchased specifically to deliver consistent performance, the sudden loss of 50% of that performance may not be acceptable, especially in highly dense virtual or highly scaled database environments.

As stated above, without such a design IT professionals are forced to manually reserve storage resources against a future failure. Because the reservation is manual, it has to be constantly adjusted as new workloads are migrated to the AFA, a time-consuming process that invites human error.
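
To make the trade-off concrete, here is a minimal sketch in Python, using assumed per-controller numbers, of what a dual-controller array can promise through a controller failure with and without reserved performance headroom, whether that reserve is enforced manually or by the architecture itself:

    # Minimal sketch (assumed numbers, not any vendor's design): the IOPS a
    # dual-controller all-flash array can sustain through a controller failure,
    # with and without reserved performance headroom.

    def sustained_iops(per_controller_iops, controllers, reserved_fraction, failed=0):
        """IOPS the array can still deliver against the level it promised."""
        usable = per_controller_iops * (controllers - failed)
        # The promise is capped by the headroom deliberately held in reserve.
        promised = per_controller_iops * controllers * (1.0 - reserved_fraction)
        return min(usable, promised)

    PER_CONTROLLER = 200_000  # assumed IOPS per controller

    # Fully active-active with no reserve: great peak, 50% drop on a failure.
    print(sustained_iops(PER_CONTROLLER, 2, 0.0, failed=0))  # 400000
    print(sustained_iops(PER_CONTROLLER, 2, 0.0, failed=1))  # 200000

    # 50% reserved (manually or by the architecture): the failure is invisible.
    print(sustained_iops(PER_CONTROLLER, 2, 0.5, failed=0))  # 200000
    print(sustained_iops(PER_CONTROLLER, 2, 0.5, failed=1))  # 200000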

2 – Provide Limited Failure Domains

The second consideration is the scope of impact from a storage failure. While almost all storage architectures are highly available, organizations still need to calculate the resiliency of the architecture as it scales and the scope of a failure domain. A failure domain represents the number of users, servers or applications that could be impacted by a storage system outage. For example, if the entire virtualized server infrastructure is stored on a single volume within an array, the failure domain is that volume and, by extension, the virtual server environment. If that volume has an error or sustains multiple drive failures, the entire virtual server environment is down.

Components will fail, so it is important to understand the risk factors that could lead to a loss of service or data until the array returns to a protected state. Data protection that guards against dual SSD failures is considered by most to be enterprise grade. While simultaneous SSD failures are unlikely, there is a significant probability that an unrecoverable media error will be encountered when rebuilding or accessing data while a single SSD failure is outstanding. In such an event, understanding the scope of the potential data loss is critical. For scale-up and scale-out architectures that treat all of the flash as a single storage pool, such a failure may result in all data going offline.
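
That rebuild risk can be estimated with a back-of-the-envelope calculation. The sketch below assumes an illustrative unrecoverable bit error rate rather than any particular vendor’s specification; it shows how the probability of hitting a media error during a rebuild grows with the amount of data that must be read, and therefore with the size of the pool the failed SSD belongs to:

    import math

    # Back-of-the-envelope sketch with illustrative numbers (not a vendor spec):
    # probability of hitting at least one unrecoverable read error while reading
    # the surviving flash to rebuild after a single SSD failure.

    def p_media_error_during_rebuild(bytes_read, uber=1e-17):
        """uber = assumed unrecoverable bit error rate, in errors per bit read."""
        bits_read = bytes_read * 8
        # 1 - (1 - uber)^bits, computed stably for very small uber values.
        return -math.expm1(bits_read * math.log1p(-uber))

    TB = 10**12

    # Rebuilding within a small protection group vs. a pool spanning the array.
    print(f"{p_media_error_during_rebuild(20 * TB):.2%}")   # ~20 TB read  -> ~0.16%
    print(f"{p_media_error_during_rebuild(500 * TB):.2%}")  # ~500 TB read -> ~3.92%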

With the shared-nothing architectures seen in some scale-out all-flash arrays, concurrent faults extend beyond SSD failures and media errors to include the storage nodes themselves. These ‘white box’ designs, built from commodity servers, tend to rely on forms of data mirroring for protection. As such, a node that is offline due to a fault or for maintenance should be viewed as a failure, even if temporary, of all the SSDs attached to that node. As a result, the failure of a fan or power supply may bring down a node and place all of the data in the array at risk of loss until the node can be brought back online or the data can be re-copied to another set of nodes.

Modern storage architectures can often provide petabytes of effective capacity and present it as a single large volume. IT planners should instead consider defining a smaller failure domain. An organization is better served by creating multiple storage pools of 250TB or 500TB, each with its own discrete level of protection. Every failure is painful, so limiting the impact of an outage is a more responsible business decision than scaling storage capacity to the maximum of the platform.
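
A simple way to think about this is the fraction of the environment exposed to any single pool-level failure. The short sketch below uses assumed capacities to show how splitting the same total capacity into smaller pools shrinks that exposure:

    # Simple sketch of the failure-domain trade-off, with assumed capacities:
    # the share of the environment taken offline by a single pool-level failure.

    def blast_radius(total_capacity_tb, pool_size_tb):
        pools = total_capacity_tb / pool_size_tb
        return 1.0 / pools  # fraction of data offline if one pool fails

    # 2 PB of effective capacity, carved up three different ways.
    print(f"{blast_radius(2000, 2000):.1%}")  # one 2 PB pool      -> 100.0%
    print(f"{blast_radius(2000, 500):.1%}")   # four 500 TB pools  -> 25.0%
    print(f"{blast_radius(2000, 250):.1%}")   # eight 250 TB pools -> 12.5%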

3 – Provide Granular Expansion

A final consideration is the scope of expansion. For an all-flash system expansion can mean additional capacity or increased performance. How the scale-up or scale-out architecture addresses this expansion is important to the IT professional.

In terms of capacity, both architectures can generally accommodate “more”, but how will the system respond when “more” is added? In a scale-up architecture, the newly added capacity comes in the form of a storage shelf attached to the storage controller. It is only acted upon as it is consumed, either by creating new volumes on that capacity or by extending existing volumes. In other words, in the scale-up environment nothing occurs until one of those two actions is triggered; capacity expansion has no impact on performance until that capacity begins to be consumed.

In a scale-out architecture, expansion comes in the form of a new storage node, which provides both more compute power and additional storage capacity to the cluster. In most cases, when a new node is added to the cluster, the existing storage volumes are immediately rebalanced to take advantage of that node: capacity is freed up on the other nodes and consumed on the new one. This can lead to a high level of network traffic each time a node is added, until the data has been rebalanced.
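
The amount of data in motion can be approximated: with an even rebalance, roughly the used capacity divided by the new node count has to land on the node that just joined. The sketch below uses assumed cluster sizes and bandwidth to illustrate why each expansion can generate hours of background network traffic:

    # Rough sketch of rebalance traffic in an evenly balanced shared-nothing
    # cluster (cluster sizes and bandwidth are assumed for illustration).

    def rebalance_traffic_tb(used_tb, nodes_before):
        """Data that must move to the new node so all nodes hold an equal share."""
        nodes_after = nodes_before + 1
        return used_tb / nodes_after

    print(rebalance_traffic_tb(400, 4))  # 400 TB used, 4 -> 5 nodes:  80.0 TB moved
    print(rebalance_traffic_tb(400, 9))  # 400 TB used, 9 -> 10 nodes: 40.0 TB moved

    # At an assumed ~1 GB/s of usable network bandwidth, 80 TB is roughly a
    # day of background copying competing with production I/O.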

Understanding how each architecture will respond to a demand for more performance is also an important consideration. The difference between a flash architecture and a hard disk architecture is that the physical media, the solid-state drive, is no longer the bottleneck; the storage controller is. Until new memory technology becomes available, improving performance is going to mean either upgrading the controllers or adding more of them.

The scale-up all-flash array is limited to the first option, upgrading the controllers, essentially making sure that the new controller has greater processing power and more I/O bandwidth. There could be some improvement to the underlying storage software, making it more efficiently threaded, for example, so that multi-core CPUs can be fully exploited; but many of the new generation of storage systems, especially all-flash arrays, already do this. As stated before, it is important that these upgrades can be done non-disruptively and do not require moving data to a new system.

For scale-out systems, performance can be increased by adding storage nodes, but in almost every case this also means buying more flash capacity. In other words, one of the scope issues with some scale-out storage systems is the lack of granularity in the upgrade process: it is difficult to add just capacity, just CPU performance or just network I/O. Also, most scale-out storage systems are still limited by per-node performance potential. For example, the performance limit on a single volume may be 100K IOPS, while the manufacturer claims 1 million IOPS by spreading ten 100K IOPS workloads across ten nodes. But if the data center needs a single volume with more than 100K IOPS, then either all the storage nodes need to be upgraded to faster, more powerful nodes or a separate cluster needs to be built.
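
The arithmetic behind that example is simple but worth spelling out, because the aggregate number and the single-volume number answer different questions. A short sketch, using the same assumed 100K IOPS per node:

    # Sketch of the per-node limit in the example above (all numbers assumed):
    # aggregate IOPS scale with node count, but a volume serviced by a single
    # node is still capped at that node's limit.

    PER_NODE_IOPS = 100_000
    NODES = 10

    aggregate_iops = PER_NODE_IOPS * NODES  # 1,000,000 IOPS across ten workloads/nodes
    single_volume_iops = PER_NODE_IOPS      # still 100,000 IOPS for any one volume

    print(f"{aggregate_iops:,} aggregate vs {single_volume_iops:,} per volume")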

Conclusion

The five original considerations for deciding between a scale-up and a scale-out architecture are still relevant, but these three new considerations extend the conversation further. The scope of the architecture, as it impacts the applications in the data center, is increasingly important. These considerations also require understanding how the data center will change thanks to the implementation of an all-flash architecture. Once the data center begins to count on flash performance, there is no turning back, and the ability to deliver that performance consistently is critical.

Sponsored by Pure Storage

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

2 comments on “Three new considerations for Scale-Out and Scale-Up All-Flash Architectures”
  1. Chris McCall says:

    Consideration #1 is a perfect example of why even all flash arrays need Storage QoS.
