Storage Architecture 3 – Composable Storage

The data center needs a new storage architecture. The first architecture, dedicated scale-up storage, provides high performance and efficiency but is operationally complex at scale. The second architecture, scale-out shared-everything storage, provides operational simplicity at scale but is less efficient in its use of compute and storage resources. The time has come for a third storage architecture, composable storage, which delivers the performance and efficiency of scale-up storage with the operational simplicity of scale-out storage.

What's Wrong With The Status Quo?

Most data centers run a mixture of applications. Some are legacy, siloed applications that require extremely high performance from a finite number of volumes and typically demand very specific guarantees for performance and availability. Scale-up storage systems are ideal for these workloads, but as the environment around them grows and the workloads change, it becomes operationally challenging for these systems to keep up.

These data centers also host a new breed of applications, environments and datasets that are scaling rapidly and are better suited to the scale-out design. While performance matters to some applications in this group, what matters more is the ability to scale quickly and flexibly (in all four directions, defined later in this article) to match the requirements of unpredictable as-a-service application models. Specific, consistent guarantees may not be required; in many cases, close enough is good enough.

Another challenge in scale-out designs is that while compute and storage resources scale well, inter-node communication often does not. The network that supports the scale-out design, often basic IP, becomes complex and can eventually become a bottleneck, adding appreciable latency to storage IO.

To deal with this dichotomy, many organizations run multiple storage systems, as many as five or six, with a mixture of architectures, both scale-up and scale-out. There is also a mixture of storage paradigms to solve each particular business challenge: Hyperconverged Infrastructure, private or hybrid IaaS, and so on. This mixture makes the storage environment very complex and brittle.

What is Composable Storage?

Composable storage is the third storage architecture. It leverages the best of scale-up and scale-out. Like scale-up architectures, a composable storage system can start with a single node, and that node can be fully utilized in terms of IO performance and capacity. But unlike a scale-up design, additional nodes can be added to the composable storage system so that more capacity or compute performance is available to the environment without introducing another point of management. Composable storage also disaggregates storage compute from storage capacity, which allows those resources to be dynamically assigned to, and released by, the applications using them.
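
To make the disaggregation idea concrete, here is a minimal Python sketch. The class and method names (ResourcePool, ComposableCluster, compose, release) are hypothetical, not any vendor's API; they simply illustrate compute and capacity being drawn from shared pools, grown by adding a node, and handed back when an application is done.

```python
# Hypothetical sketch of disaggregated, composable resources.

class ResourcePool:
    """Tracks one disaggregated resource (e.g. CPU cores or TB of flash)."""
    def __init__(self, name, total):
        self.name, self.total, self.used = name, total, 0

    def allocate(self, amount):
        if self.used + amount > self.total:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.used += amount

    def release(self, amount):
        self.used = max(0, self.used - amount)


class ComposableCluster:
    def __init__(self, cores, capacity_tb):
        self.compute = ResourcePool("compute-cores", cores)
        self.capacity = ResourcePool("capacity-tb", capacity_tb)

    def add_node(self, cores, capacity_tb):
        # A new node simply grows the shared pools --
        # no new point of management is introduced.
        self.compute.total += cores
        self.capacity.total += capacity_tb

    def compose(self, app, cores, capacity_tb):
        self.compute.allocate(cores)
        self.capacity.allocate(capacity_tb)
        return {"app": app, "cores": cores, "tb": capacity_tb}

    def release(self, grant):
        self.compute.release(grant["cores"])
        self.capacity.release(grant["tb"])


cluster = ComposableCluster(cores=32, capacity_tb=100)   # start with one node
vol = cluster.compose("oltp-db", cores=8, capacity_tb=20)
cluster.add_node(cores=32, capacity_tb=100)              # scale without a new silo
cluster.release(vol)                                     # resources return to the pools
```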

Early iterations of this design were called scale-right architectures. While a vast improvement over scale-up and scale-out architectures, these scale-right designs became scale-out once nodes were added, and as such inherited many of the negative properties of scale-out. In other words, scale-right was not really a new architecture, merely a bridge between the two existing architectures.

Composable storage, instead of being a bridge between scale-up and scale-out architectures, is in fact an architecture in its own right. As it makes the shift from scale-up to scale-out, it addresses the limitations of scale-out architectures: namely, the inability to dedicate specific performance characteristics to specific applications, and the potential network bottleneck that inter-node communication creates as the environment scales.

To address the dedicated performance limitation, composable storage creates dynamically composable virtual private storage systems within the storage cluster. Each dedicated virtual storage array can be hard-allocated specific performance attributes in terms of IOPS, bandwidth and capacity. The virtual storage array can then be used with legacy applications where very specific performance requirements must be met.
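
Below is a hedged sketch of how such hard allocation might work. The names (StorageCluster, carve, VirtualPrivateArray) are invented for illustration; the point is the admission check: a request that cannot be fully guaranteed is refused rather than oversubscribed, so existing guarantees are never diluted.

```python
# Hypothetical sketch of a "virtual private storage array" with hard QoS limits.

class VirtualPrivateArray:
    def __init__(self, name, iops, bandwidth_mbps, capacity_tb):
        self.name = name
        self.iops = iops
        self.bandwidth_mbps = bandwidth_mbps
        self.capacity_tb = capacity_tb


class StorageCluster:
    def __init__(self, total_iops, total_bandwidth_mbps, total_capacity_tb):
        self.free = {"iops": total_iops,
                     "bandwidth_mbps": total_bandwidth_mbps,
                     "capacity_tb": total_capacity_tb}
        self.arrays = []

    def carve(self, name, **limits):
        # Hard allocation: refuse the request outright rather than
        # oversubscribe, so existing guarantees stay intact.
        for key, amount in limits.items():
            if amount > self.free[key]:
                raise RuntimeError(f"cannot guarantee {amount} {key}; "
                                   f"only {self.free[key]} unreserved")
        for key, amount in limits.items():
            self.free[key] -= amount
        vpa = VirtualPrivateArray(name, **limits)
        self.arrays.append(vpa)
        return vpa


cluster = StorageCluster(total_iops=1_000_000, total_bandwidth_mbps=20_000,
                         total_capacity_tb=500)
legacy = cluster.carve("legacy-erp", iops=200_000, bandwidth_mbps=4_000,
                       capacity_tb=50)
```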

To address the networking issue, composable storage systems also need to provide better networking. Not only does better networking enable scale, it also allows more complex functions like the virtual private storage array. The problem is that advanced networking is expensive and often proprietary. NVMe over Fabrics may give composable storage vendors a way to deliver advanced networking without being locked into a proprietary or niche networking standard. NVMe also enables composable storage to deliver a four-way scaling capability: scale-up, scale-out, scale-in (less capacity per node) and scale-down (fewer controllers per cluster).
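
A small sketch, again with hypothetical method names, makes the four directions concrete: two of them change the number of nodes in the cluster, and two change the resources within a node.

```python
# Hypothetical sketch of the four scaling directions named above.

class FourWayCluster:
    def __init__(self):
        self.nodes = []                        # each node: dict of its resources

    def scale_out(self, cores, capacity_tb):   # more nodes in the cluster
        self.nodes.append({"cores": cores, "capacity_tb": capacity_tb})

    def scale_down(self):                      # fewer controllers in the cluster
        return self.nodes.pop()

    def scale_up(self, node_idx, extra_tb):    # more capacity within a node
        self.nodes[node_idx]["capacity_tb"] += extra_tb

    def scale_in(self, node_idx, fewer_tb):    # less capacity within a node
        self.nodes[node_idx]["capacity_tb"] -= fewer_tb


cluster = FourWayCluster()
cluster.scale_out(cores=32, capacity_tb=100)
cluster.scale_up(0, extra_tb=50)
cluster.scale_in(0, fewer_tb=50)
cluster.scale_down()
```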

NVMe is a new protocol designed specifically for communication with memory-based storage devices. It is designed to communicate over a PCIe bus and significantly increases command count and IO queue depth. NVMe over Fabrics is the networked extension of that standard, enabling network performance that rivals a local connection.
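
A quick back-of-the-envelope comparison shows why the queue-depth increase matters. The figures below come from the published AHCI and NVMe specifications: one 32-command queue for AHCI/SATA versus up to roughly 64K queues of 64K commands each for NVMe.

```python
# Outstanding-command capacity: AHCI/SATA versus NVMe (per the specs).

ahci_outstanding = 1 * 32              # one command queue, 32 commands deep
nvme_outstanding = 65_535 * 65_536     # up to 64K I/O queues x 64K commands each

print(f"AHCI/SATA: {ahci_outstanding:,} outstanding commands")
print(f"NVMe:      {nvme_outstanding:,} outstanding commands")
```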

Integrating NVMe over Fabrics into the composable storage architecture is a logical step. The nodes within the cluster can communicate at performance and latency levels almost as low as if they were direct-attached. The result is very efficient scaling, as well as the ability to scale further.

Software Defined Storage is Key

Data centers need the capabilities of composable storage now. They can't wait for storage vendors to design custom hardware and modify their software, especially considering the hardware is available right now. Servers with next-generation Intel processors, PCIe buses and full NVMe support are coming to market now. Arriving in parallel with these servers are NVMe flash devices that promise new lows in latency and new highs in IOPS, along with NVMe over Fabrics ready network cards.

If all the hardware components are available, the missing link is the storage software. Software defined storage vendors should be able to quickly adapt their software to the new reality of high-performance, NVMe-powered hardware and deliver solutions that greatly reduce the number of storage systems a data center must operate.

StorageSwiss Take

The data center of the future should be able to reduce its storage system count to two systems total. One system, likely on-premises, will be an all-flash array based on the third architecture, composable storage. The second system, potentially in the cloud, will be an object storage system designed to archive, preserve and retain inactive data.

Sponsored by Kaminario

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
