All-Flash Arrays bring an unprecedented level of performance to applications in the data center. Most of that gain comes from replacing hard disk drives with flash media, and most of it is the result of reduced latency. But the latency was not eliminated; it just moved. Now the other components of the storage architecture are under pressure to deliver similar reductions in latency.
While some latency is introduced by the storage software as it adds more and more features, most of it comes from the interconnects between the various architecture components and the flash media. There is latency in the internal connections between the storage system’s CPU and the storage software. There is latency in scale-out architectures as they interconnect storage servers (nodes) into a storage cluster, and there is latency in the connection to the physical hosts attaching to the storage system.
The latency caused by these various interconnections has led some environments to shift to a direct-attached-only storage model, where the application interfaces directly with storage internal to the server it is running on. The problem is that these applications then suffer all the challenges that direct attached storage brings with it: poor resource efficiency, limited high availability options and difficult data protection integration.
Solving The Latency Problem
At the heart of storage architecture latency is the protocol it has in common: SCSI. Introduced in 1986, SCSI was designed for the hard disk era and cannot deliver the number of IO commands that a solid state drive can support. As a result, the SSD ends up waiting on the protocol. NVMe was created to solve that problem by drastically increasing IO queue depth and command count. An NVMe SSD provides significantly better performance than a SCSI SSD.
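The scale of the difference is easiest to see in the queueing models themselves. The figures below are the commonly cited spec-level maximums (a single SATA/AHCI queue of 32 commands, a single SAS queue of roughly 254 commands, and up to 65,535 NVMe queues of 65,536 commands each); real deployments typically configure far fewer NVMe queues, usually one or more per CPU core.

```python
# Illustrative comparison of outstanding-command capacity per protocol.
# These are spec-level maximums, not what a given drive or HBA ships with.
protocols = {
    # protocol: (queues, commands per queue)
    "SATA/AHCI": (1, 32),
    "SAS/SCSI":  (1, 254),
    "NVMe":      (65_535, 65_536),
}

for name, (queues, depth) in protocols.items():
    total = queues * depth
    print(f"{name:10s} {queues:>6} queue(s) x {depth:>6} = {total:>13,} commands")
```

Even allowing for the gap between spec maximums and shipping hardware, the point stands: the parallelism of flash is wasted behind a single shallow SCSI queue.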
The first step in the NVMe rollout will be for storage systems to use NVMe SSDs to improve the internal communication of the storage server. But NVMe is more than just an internal connection protocol. NVMe over Fabrics (NVMe-oF) enables low latency networking outside of the storage server.
The next step will be for storage system vendors to use NVMe as an interconnect between storage servers, so scale-out storage architectures can scale without incurring inter-node latency. Finally, NVMe-oF will connect to physical servers to deliver shared storage latency that rivals that of internal storage.
Innovations from a Low Latency Network
A low latency network should allow vendors to deliver storage systems far more innovative than what is on the market today. It should enable composable storage, where controller resources and storage capacity are equally available to applications and can be reassigned to them as needed, or returned to a general pool when they are not. The result should be significant gains in efficiency and the ability to guarantee performance to mission critical applications.
Introducing Kaminario K2.N
Kaminario is on its sixth generation of software defined all-flash arrays. The Kaminario K2 Gen6 is a high performance all-flash array built around Kaminario’s VisionOS storage operating environment, which provides a scale-up and scale-out architecture and a framework of data services:
- DataShrink: deduplication, compression and zero detect.
- DataProtect: snapshots, replication and encryption.
- DataManage: the GUI, command line interface and a RESTful API.
- DataConnect: connectivity with OpenStack, Docker, KVSS, VMware and UCS.
- Kaminario Clarity: a cloud-based analytics engine that provides predictive intelligence and support for Kaminario customers.
The K2.N builds on the capabilities of the Gen6 array and will enable customers to take full advantage of NVMe architectures. First, the internal connectivity will be to PCIe-based NVMe drives. Second, the back-end connectivity will be fully converged NVMe. This means connectivity between nodes in the scale-out storage cluster will be made via NVMe, enabling even lower latencies than the already impressive K2 Gen6, which is based on InfiniBand.
With this NVMe connectivity, Kaminario will also introduce composable storage via Kaminario Flex. The system will be able to create virtual private arrays out of the available storage resources. The administrator will be able to assign a specific set of controllers and capacity to an application or operating environment, assuring application specific performance. If the application needs more performance, the administrator can assign additional storage controllers or capacity on the fly, without disrupting the application. All K2 systems will be able to take advantage of this software orchestration layer.
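The resource model described above can be sketched in a few lines of code. This is a minimal illustration of the composable storage idea only; the class and method names here are hypothetical and do not reflect Kaminario Flex’s actual API.

```python
# Hypothetical sketch of a composable-storage resource pool. Controllers
# and capacity are carved into "virtual private arrays" (VPAs), grown on
# the fly, and returned to the general pool when released.
from dataclasses import dataclass


@dataclass
class VirtualPrivateArray:
    name: str
    controllers: int   # controller nodes dedicated to this application
    capacity_tb: int   # capacity carved out of the shared pool


class ResourcePool:
    def __init__(self, controllers: int, capacity_tb: int):
        self.free_controllers = controllers
        self.free_capacity_tb = capacity_tb
        self.arrays = {}

    def create_vpa(self, name, controllers, capacity_tb):
        assert controllers <= self.free_controllers
        assert capacity_tb <= self.free_capacity_tb
        self.free_controllers -= controllers
        self.free_capacity_tb -= capacity_tb
        self.arrays[name] = VirtualPrivateArray(name, controllers, capacity_tb)
        return self.arrays[name]

    def grow(self, name, controllers=0, capacity_tb=0):
        # Non-disruptive reassignment: move free resources into a VPA.
        assert controllers <= self.free_controllers
        assert capacity_tb <= self.free_capacity_tb
        vpa = self.arrays[name]
        vpa.controllers += controllers
        vpa.capacity_tb += capacity_tb
        self.free_controllers -= controllers
        self.free_capacity_tb -= capacity_tb

    def release(self, name):
        # Return a VPA's resources to the general pool.
        vpa = self.arrays.pop(name)
        self.free_controllers += vpa.controllers
        self.free_capacity_tb += vpa.capacity_tb


pool = ResourcePool(controllers=8, capacity_tb=400)
pool.create_vpa("oltp-db", controllers=2, capacity_tb=50)
pool.grow("oltp-db", controllers=1)   # add performance on the fly
print(pool.free_controllers, pool.free_capacity_tb)  # 5 350
```

The key property a low latency fabric makes practical is the `grow`/`release` cycle: because any controller can reach any media over NVMe with near-local latency, resources can move between applications without data migration.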
The rest of the storage architecture, like switches and host adapters, will likely convert to NVMe at a much slower pace. To support that, Kaminario will offer open front-end connectivity that can range from Fibre Channel, to iSCSI and eventually to NVMe-oF. This capability allows the data center to enjoy the benefits of NVMe where it is needed most (internal and inter-node connectivity) and convert the rest of the environment as time and demand require.
There are parts of the storage architecture that need NVMe right now. While most vendors agree that NVMe SSDs (internal connectivity) are a high priority, most are ignoring a second high priority: inter-node connectivity. Kaminario seems to be the first to address that need. The value of using NVMe for inter-node connectivity is not just lower latency communication, but the fact that low latency opens up areas for innovation like composable storage.
Sponsored by Kaminario