When the founders of Data Domain and VMware are introduced by Diane Greene to create a storage company, things are bound to be interesting. Typically, product briefings from Storage Switzerland do not start with the pedigree of those developing the product, but it seemed appropriate in this case, mainly because the first reaction to how they are approaching primary storage is one of incredulity and skepticism. “Well, that’s certainly a different way to do it,” is the first thing that comes to mind.
For a long time, we have been taught that some level of RAID or mirroring is necessary for data to survive component failure, and that as long as there are compute cycles to do that work, the compute-to-drive ratio defines the performance envelope of the system. As core counts grew, more and more data management services (dedupe, snapshots/clones, etc.) were folded into that same compute-to-drive ratio. System performance has a lot to do with how much compute there is, which is why midrange two-controller arrays have always offered small, medium, and large controller options in addition to drive shelves.
Performance and capacity provisioning have always been intertwined. Traditional RAID and RAID-like systems always put the two functions in the same place. Whether we’re talking about a storage array, a hyper-converged infrastructure (HCI) system, or simply server-side flash in a server, primary media has always been protected with some level of RAID or a RAID-like scheme, with local compute as the constraint.
The founders of Datrium asked a question no one else had asked: what if we separate the need for performance from the need for persistence?
Once you let go of the preconceived notion that performance and persistence must be tied together, Datrium DVX and its new take on server-storage convergence are relatively easy to understand. DVX host software performs traditional array data services using local host CPU, which scales with each added host, but unlike HCI, the hosts are isolated from each other’s I/O to simplify tuning and troubleshooting in mixed-use private clouds.
After first being deduped and compressed, all data is written to three places: local VM host ‘instance’ flash from your server vendor, and the internally-mirrored NVRAM in the NetShelf, a persistent data server connected via a 10 GbE network. Writes are acknowledged once they have been mirrored in NVRAM. Writes are then coalesced by the hosts and written, with RAID 6 parity, to the NetShelf’s low-cost drives for the long term.
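The write path above can be sketched in a few lines. This is a hypothetical simplification under the assumptions stated in the text; every class and method name here is illustrative, not Datrium’s actual software:

```python
# Hypothetical sketch of the described DVX write path. All names are
# illustrative stand-ins, not Datrium's real interfaces.
import hashlib
import zlib


class NetShelf:
    """Stand-in for the persistent data server."""

    def __init__(self):
        self.nvram = [{}, {}]   # internally mirrored NVRAM (two copies)
        self.stripes = []       # coalesced, long-term stripes

    def write_stripe(self, stripe):
        # Real hardware would add RAID 6 parity blocks here.
        self.stripes.append(stripe)


class DVXHost:
    def __init__(self, netshelf):
        self.netshelf = netshelf
        self.instance_flash = {}   # local read cache on server flash
        self.log = []              # pending writes awaiting coalescing

    def write(self, block: bytes) -> bool:
        # 1. Dedupe by content fingerprint, then compress new blocks.
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.instance_flash:
            data = zlib.compress(block)
            # 2. Write to local instance flash and both NVRAM mirrors.
            self.instance_flash[fp] = data
            for mirror in self.netshelf.nvram:
                mirror[fp] = data
            self.log.append(fp)
        # 3. Acknowledge once the write is mirrored in NVRAM.
        return True

    def flush(self):
        # 4. Coalesce logged writes into a stripe and push it to the
        #    NetShelf's low-cost drives for the long term.
        self.netshelf.write_stripe(
            [self.instance_flash[fp] for fp in self.log])
        self.log.clear()
```

Note how the acknowledgement happens after the NVRAM mirror, not after the stripe lands on disk: that is what keeps write latency decoupled from the long-term RAID 6 write.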
The data in the host’s instance flash is what applications read from. It is essentially an ultra-large local read cache of data that is also resident on the NetShelf (up to 16 TB raw / 100 TB effective per host). Because it’s local, flash reads don’t suffer from the queue lengths and neighbor noise that SAN arrays do; latencies stay small as hosts are added.
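Under those assumptions, the read behavior amounts to a cache lookup with a network fallback; a minimal sketch, with hypothetical names:

```python
# Hypothetical sketch of the read path: reads are served from the
# host's local instance flash, falling back to the NetShelf copy on a
# cache miss. Names are illustrative, not Datrium's API.
def read(fingerprint: str, instance_flash: dict, netshelf_store: dict) -> bytes:
    block = instance_flash.get(fingerprint)
    if block is None:
        # Cache miss: fetch over the 10 GbE link and repopulate local
        # flash so subsequent reads stay local and low-latency.
        block = netshelf_store[fingerprint]
        instance_flash[fingerprint] = block
    return block
```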
The NetShelf performs the persistence and protection function. If a server-side SSD fails, the NetShelf, which is optimized for host-flash uploads, streams the data no longer in cache back to the remaining SSD capacity.
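That re-warm step can be sketched as a bounded refill of the cache from the persistent copy; again a hypothetical simplification with illustrative names:

```python
# Hypothetical sketch of cache re-warm after a host SSD failure: the
# NetShelf streams back blocks no longer cached, up to the host's
# remaining flash capacity. Illustrative only.
def rewarm(instance_flash: dict, netshelf_store: dict, capacity_blocks: int):
    for fp, block in netshelf_store.items():
        if len(instance_flash) >= capacity_blocks:
            break                      # remaining SSD capacity is full
        instance_flash.setdefault(fp, block)   # only refill missing blocks
```

Because every block already persists on the NetShelf, losing a host SSD costs cache warmth, not data.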
Removing the replica overhead from the VM host and placing it in a separate, optimized data server makes the VM host flash lower cost and even faster, since it is unencumbered by the write load of host-to-host copies. Datrium claims write latency is more predictable than in typical server-side approaches because hosts don’t have to juggle the work of other hosts; generally, hosts don’t talk to each other at all in the Datrium approach. The company also claims that DVX is half the cost of a hybrid storage array, because flash on the server is a fraction of the cost of flash in an array, and because customers can use underutilized CPU resources within their own “brownfield” server infrastructure instead of buying expensive storage controllers. Finally, one of Datrium’s taglines is “the simplicity of HCI without the lock-in,” in that DVX supports blade servers and brownfield servers, and does not restrict server configurations within a given cluster.
The way that Datrium has designed their product certainly causes one to do a double take, but it sounds like a very interesting idea. The idea that server-storage convergence will continue to evolve makes sense, and Datrium’s approach has real value. Couple that with the pedigree of the founders and the fact that they already have dozens of customers in less than two quarters, and you have a company that you should not ignore. They will need to expand native data protection features to be considered a modern enterprise solution, but they have already said that this is in the plans, and for now you can work around it with a data protection product.