Many advanced application architectures use Direct Attached Storage (DAS) instead of centralized shared storage for two reasons. First, in most cases server drives are less expensive than the drives found in shared storage systems. Second, internally attached storage has much lower latency than storage that must be reached across a network.
The Direct Attach Myth
The pricing advantage that server SSDs enjoy is partly real and partly myth. Even though the components are often the same, the markup on a drive in a shared storage system is typically higher, but that delta is narrowing. In addition, the drive in an all-flash storage array is used far more efficiently. Its capacity is shared across multiple systems, its data protection is parity based (applications that use DAS typically make two to three full copies of the data on other servers), and the storage system more than likely provides storage efficiency capabilities like data deduplication and compression. The result is that the “drive” often ends up being less expensive than the same drive placed in a server that is part of an application cluster.
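The efficiency argument above can be reduced to simple arithmetic. The sketch below compares effective cost per usable terabyte under the two data-protection models described; every dollar figure and ratio is a hypothetical assumption for illustration, not vendor pricing.

```python
# Illustrative cost-per-usable-TB comparison. All numbers are assumed
# placeholders, not real prices or guaranteed efficiency ratios.

def cost_per_usable_tb(drive_cost_per_tb, copies, efficiency_ratio):
    """Effective cost per usable TB, given a data-protection copy
    overhead and a dedup/compression efficiency ratio."""
    return drive_cost_per_tb * copies / efficiency_ratio

# DAS cluster: cheaper drives, but three full replicas and no
# deduplication or compression.
das = cost_per_usable_tb(drive_cost_per_tb=100, copies=3, efficiency_ratio=1.0)

# All-flash array: pricier drives, parity-based protection (~1.25x
# capacity overhead) plus an assumed 3:1 dedup/compression ratio.
afa = cost_per_usable_tb(drive_cost_per_tb=200, copies=1.25, efficiency_ratio=3.0)

print(f"DAS: ${das:.2f} per usable TB")
print(f"AFA: ${afa:.2f} per usable TB")
```

Under these assumed inputs the nominally more expensive array drive delivers usable capacity at a fraction of the DAS cost, which is the point the paragraph above is making.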
The Direct Attached Reality
Latency is another problem altogether. NVMe drives enable CPUs to access SSDs faster, and the protocol is better optimized for solid state storage devices. All-flash arrays also benefit from NVMe drives, but they still have the latency problem of crossing a network, which today is Fibre Channel or IP based and uses legacy SCSI or NFS as the transport protocol.
There are also more parts in a shared storage architecture: adapters to install in the servers, network switches, and network interfaces on the storage system. The legacy protocol plus these physical connections add up to latency that some applications simply can’t afford.
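Those extra components each add a slice of delay to every I/O. The sketch below tallies a rough latency budget for one read on each path; the microsecond figures are assumed, order-of-magnitude placeholders, not measurements of any product.

```python
# Rough per-I/O latency budget (microseconds). Every figure is an
# assumed placeholder chosen only to show how the components add up.

das_path = {
    "nvme_drive": 80,        # NVMe SSD media + controller, locally attached
}

shared_path = {
    "host_adapter": 10,      # HBA/NIC installed in the server
    "fabric_switch": 5,      # network switch hop
    "scsi_transport": 50,    # legacy SCSI/NFS protocol overhead
    "array_frontend": 25,    # storage-system network interface + software
    "nvme_drive": 80,        # the same drive at the back end
}

print(f"DAS path:    {sum(das_path.values())} us")
print(f"Shared path: {sum(shared_path.values())} us")
```

The absolute numbers matter less than the structure: the drive itself is identical in both paths, so everything the shared path adds on top of it is pure overhead.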
Solving the Shared Storage Latency Problem
First, it is important to note that for most data centers, today’s SAS based all-flash arrays provide all the performance they need, and current latencies are not an issue. There are some unique environments, typically an application or two rather than the whole data center, where squeezing the last bit of latency out of the architecture will make a difference to applications and users.
The use of NVMe drives inside storage systems, while not solving the broader network latency problem, does reduce latency in one of the more problematic areas – the interconnect between the storage software, the CPU and the drives themselves. This reduction will meet the performance demands of many data centers.
There are others, though, where latency will remain a concern even with an NVMe all-flash array. These environments will want to look at NVMe over Fabrics (NVMe-oF). NVMe-oF uses the same protocol as NVMe, except it is designed to travel across a network. Just as Fibre Channel and Ethernet can transport the SCSI protocol, they can also transport the NVMe protocol. Any connection adds some latency, but NVMe-oF latencies will be very similar to those of DAS. The result is shared storage with the performance of DAS, without the shortcomings in efficiency, data protection and scalability.
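The claim that NVMe-oF brings shared storage close to DAS rests on removing the legacy SCSI translation layer from the network path while the physical hops remain. A minimal sketch, with all microsecond figures assumed for illustration only:

```python
# Hypothetical effect of swapping the legacy SCSI transport for
# NVMe-oF on a shared-storage path. All figures are assumptions.

rest_of_path_us = 100    # adapters, switch hop, array software, drive
scsi_transport_us = 50   # assumed legacy SCSI/NFS translation cost
nvmeof_transport_us = 10 # assumed NVMe-oF transport cost (no SCSI layer)

print(f"SCSI-based shared storage: {rest_of_path_us + scsi_transport_us} us")
print(f"NVMe-oF shared storage:    {rest_of_path_us + nvmeof_transport_us} us")
```

Only the transport term changes; the remaining gap versus DAS is the physical fabric itself, which is why the article describes NVMe-oF latencies as very similar to, rather than identical to, direct attach.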
To learn more about NVMe and its impact on the data center, join Storage Switzerland and Tegile for our on-demand webinar, “What’s Your Path to NVMe?”. In this webinar you’ll learn all about NVMe, and we’ll provide a step-by-step strategy for getting there.