Modern workloads such as Hadoop, Kafka and machine learning are demanding in terms of the volume of data that must be processed, the speed at which that data must be processed, and the variability and unpredictability of their capacity and performance requirements. They require a blend of performance, adaptable capacity, flexibility in configuration, and cost-efficiency obtained through better resource utilization that legacy direct-attached and shared storage architectures simply cannot deliver.
As Storage Switzerland previously blogged, the intersection of non-volatile memory express (NVMe) over Fabrics (NVMe-oF) and a composable architecture stands to drive down costs and increase flexibility while at the same time providing levels of performance that meet the needs of the enterprise’s most demanding workloads. Composable architectures that support a variety of network options such as iSCSI, NVMe over TCP and NVMe over RDMA provide flexibility for users to optimize the network design for cost, performance or latency. In this installment, we will assess the DriveScale Composable Platform from this vantage point.
Composable infrastructures abstract the resources of physical systems into common pools, from which virtual systems, tailored to workload-specific requirements, may be spun up and down on the fly. Composable infrastructure facilitates greater agility in spinning up application-specific systems. Because resources are pooled, it also stands to improve compute, memory, and storage capacity utilization alike, setting the stage to cost-effectively accelerate performance. The problem is that many composable infrastructure solutions are limited to a specific chassis or proprietary switch – they cannot scale seamlessly across racks.
DriveScale Composable Platform
DriveScale enables rack-scale and data center-scale composability through its Composable Platform. DriveScale’s software connects industry-standard compute and storage resources (including SAS, NVMe, SSD and HDD) over an Ethernet network fabric. Disaggregated, diskless compute nodes and simple storage resources are composed in real time into virtual systems according to application or workload requirements. The storage systems are Ethernet-attached JBODs (Just a Bunch of Disks) or JBOFs (Just a Bunch of Flash), which DriveScale calls “EBODs”. Based on user policy and cluster design, DriveScale automates the process of mounting drives to compute nodes over the network using iSCSI, NVMe over TCP or NVMe over RoCEv2, at scale. The solution pulls resources from across the network, but they appear local to the application and provide performance equivalent to that of local drives. According to DriveScale, its API-driven centralized management and orchestration platform can centrally manage 10,000 nodes and 100,000 drives as one or more clusters.
The core value proposition of the DriveScale Composable Platform includes:
- Faster and more flexible allocation and reallocation of system resources. Storage managers no longer spend their time writing code or dealing with LUN addresses. Applications and workloads come online faster, and resources are not left sitting idle.
- Workloads can share the same infrastructure, and if one workload is overprovisioned, those unused resources can be released back to the resource pool to be used by another workload.
- Failed hardware resources can be swapped out through software, without impacting production workloads and without having to physically replace the resource right away.
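The pooling idea behind the first two points above can be sketched in a few lines of code: drives live in a shared pool, are bound to a compute node on demand, and are released back when a workload turns out to be overprovisioned. This is a purely illustrative model; the class and method names are ours, not DriveScale’s actual API.

```python
class ResourcePool:
    """Toy model of a composable drive pool (illustrative only)."""

    def __init__(self, drives):
        self.free = set(drives)   # unallocated drives in the shared pool
        self.bound = {}           # compute node -> set of drives bound to it

    def compose(self, node, count):
        """Bind `count` free drives to a compute node."""
        if count > len(self.free):
            raise RuntimeError("pool exhausted")
        picked = {self.free.pop() for _ in range(count)}
        self.bound.setdefault(node, set()).update(picked)
        return picked

    def release(self, node, count):
        """Return `count` drives from a node back to the shared pool."""
        giving = set(list(self.bound[node])[:count])
        self.bound[node] -= giving
        self.free |= giving
        return giving


pool = ResourcePool([f"nvme{i}" for i in range(8)])
pool.compose("hadoop-worker-1", 4)   # spin up a Hadoop node with 4 drives
pool.release("hadoop-worker-1", 2)   # workload shrank: give 2 drives back
print(len(pool.free))                # prints 6 - available to other workloads
```

In a real deployment the `compose` step would translate into mounting remote drives over iSCSI or NVMe over TCP/RoCEv2, but the bookkeeping, and the efficiency argument, is the same: unused capacity returns to a common pool instead of sitting stranded on one server.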
From a data protection standpoint, requirements are dictated by the application stack, and DriveScale’s platform communicates with the host operating system to deliver the required underlying infrastructure topology. For example, a Hadoop cluster keeps three copies of data for redundancy, and DriveScale’s platform knows the availability zones and how to place those copies across systems or racks to maintain data availability.
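To make the placement logic concrete, here is an illustrative sketch in the spirit of HDFS’s default replica policy: the first copy lands on the writer’s node, and the second and third on two different nodes in another rack, so a whole-rack failure cannot take out all copies. The topology map and node names are hypothetical, and this is a simplification of what any real orchestrator does.

```python
import random

# Hypothetical two-rack topology (rack -> nodes in that rack)
topology = {
    "rack1": ["node-a", "node-b", "node-c"],
    "rack2": ["node-d", "node-e", "node-f"],
}

def place_replicas(writer_node, topology):
    """Pick three nodes for three copies: one local, two on a remote rack."""
    local_rack = next(r for r, nodes in topology.items() if writer_node in nodes)
    remote_rack = random.choice([r for r in topology if r != local_rack])
    first = writer_node                                # copy 1: writer's node
    second, third = random.sample(topology[remote_rack], 2)  # copies 2 and 3
    return [first, second, third]

replicas = place_replicas("node-a", topology)
# e.g. ['node-a', 'node-e', 'node-d'] - one copy local, two on the other rack
```

A rack-aware composable platform supplies exactly the topology knowledge this function takes for granted: which drives sit behind which rack and availability zone.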
The fragmented nature of most storage environments today reflects the need for an application-led approach to storage infrastructure. The underutilization of resources in siloed cluster infrastructure is exacerbated by bare-metal and scale-out workloads such as big data analytics, machine learning and cloud-native applications that increasingly run modern businesses. DriveScale will appeal to organizations looking to serve these workloads with the performance levels of local storage and the seamless scale of shared-nothing architectures, but with greater efficiency and agility.
Performance- and data-intensive workloads require the performance levels of direct-attached storage, but with more adaptable infrastructure that better utilizes resources. The performance and capacity potential of NVMe is driving IT to consider new storage, but the technology requires some help in the form of a new storage architecture in order to deliver on this potential. Composability and NVMe together provide the performance, elasticity and resource utilization demanded by these modern workloads. Carefully consider your deployment options. For example, RDMA adds expense but provides the lowest-latency solution. NVMe over TCP, on the other hand, provides a more cost-effective and standardized solution with very good performance. A composable infrastructure that is flexible in the underlying networking it supports may be the answer for many IT shops looking to support these workloads with a balance of fast performance and cost efficiency.
To learn more about the value of composable infrastructure in the context of a modern workload ecosystem, watch our on-demand webinar “Composing Infrastructure for Elastic, Hadoop, Kafka and Splunk”. Register and receive a copy of Storage Switzerland’s eBook “Is NVMe-oF Enough to Fix the Hyperscale Problem?”