In 2010, there were almost a dozen major NAS vendors on the market. Today, the market has shrunk, ironically at a time when organizations need unstructured data storage solutions more than ever. The primary reason the NAS market has shrunk is that current NAS providers are too distracted by object storage and see public cloud storage providers as the enemy.
The NAS Problem with Object Storage
Object storage is a different way to store data. Instead of using the traditional folder-and-file hierarchy, object storage stores data in a flat address space. It assigns each object a unique identifier and stores and retrieves data via that identifier. Accessing data via unique identifiers, instead of a folder hierarchy, means that applications either have to be rewritten to support unique identifiers, or some sort of traditional file system emulation needs to occur. Getting organizations to redevelop applications specifically to support object storage is hard, and as a result, all object storage systems offer some sort of file system gateway, which ironically negates some of the advantages of an object store.
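To make the flat-address-space model concrete, here is a minimal sketch in Python. The `FlatObjectStore` class and its methods are illustrative only, not any vendor's API; the point is that the store assigns the identifier and the application must track it, because there is no folder hierarchy to browse.

```python
import uuid

class FlatObjectStore:
    """Toy illustration of a flat object namespace: no folders,
    every object is addressed only by its unique identifier."""

    def __init__(self):
        self._objects = {}  # flat address space: identifier -> bytes

    def put(self, data: bytes) -> str:
        # The store, not the caller, assigns the unique identifier.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        # Retrieval requires the identifier; there is no path to look up.
        return self._objects[object_id]

store = FlatObjectStore()
oid = store.put(b"sensor reading 42")
print(store.get(oid))  # the application must have kept the returned id
```

This is exactly the property that forces application rewrites: a program built around `open("/data/file.txt")` has no identifier to present.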
In some ways, you could say that NAS hasn't gone away; instead, it has changed into a NAS gateway with an object store backend. In Storage Switzerland's experience, most object storage systems are accessed through a gateway rather than a native object interface. These gateways are used to store files, serve as targets for backup and archive, and receive data from IoT devices, which typically expect a traditional NFS or SMB mount point.
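The gateway pattern described above amounts to mapping hierarchical paths onto flat object keys. A minimal sketch, assuming a simple path-to-key translation (the `FileGateway` class and its method names are hypothetical, not a real gateway implementation):

```python
class FileGateway:
    """Toy file system gateway: presents folder/file paths to clients
    while storing everything in a flat key space underneath."""

    def __init__(self):
        self._backend = {}  # flat object store: key -> bytes

    def write_file(self, path: str, data: bytes) -> None:
        # Collapse the hierarchical path into a single flat key.
        self._backend[path.strip("/")] = data

    def read_file(self, path: str) -> bytes:
        return self._backend[path.strip("/")]

    def list_dir(self, directory: str) -> list[str]:
        # The "folder" view is synthesized by prefix-matching keys;
        # this translation layer is part of what negates some of the
        # advantages of a native object store.
        prefix = directory.strip("/") + "/"
        return sorted(k for k in self._backend if k.startswith(prefix))

gw = FileGateway()
gw.write_file("/backups/db/monday.bak", b"...")
gw.write_file("/backups/db/tuesday.bak", b"...")
print(gw.list_dir("/backups/db"))
```

Real gateways do the same translation for NFS or SMB operations, which is why backup, archive, and IoT workloads that expect a mount point can land on an object backend unchanged.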
Instead of directly fixing the problem, the traditional NAS market has ceded much of the unstructured data storage responsibility to object storage. One reason is that the larger storage vendors provide most NAS systems available today, and those same vendors also offer an object storage solution that is less expensive than the NAS system.
The NAS Problem with Public Cloud
The second challenge NAS vendors face is public cloud storage, which is also typically object-based. Most NAS vendors see public cloud storage as the enemy, when actually they should see it as an opportunity. Cloud support from most NAS vendors, if available at all, is typically rudimentary at best. NAS vendors often use the cloud as a replication target to store a disaster recovery copy of the data.
Instead, NAS vendors should look at the cloud as a great opportunity. They should make sure their software not only runs as a cloud instance but also can scale out to take advantage of cloud compute and cloud storage when necessary. Vendors that take this approach offer their customers not only better performance and features than what is typically available in the cloud, but also a seamless hybrid cloud experience. With a cloud version of their NAS software, a customer can typically run an application in the cloud unchanged and, more importantly, pull that application back on-premises again when needed.
NAS vendors disappeared because they got distracted by their perceived competitors, object storage and public cloud storage, instead of making sure their operating systems kept up with the realities of a massively growing unstructured data environment. While a traditional NAS system can scale to hundreds of terabytes, and in some cases petabytes, that raw capacity means little when the reality is millions, if not billions, of files.
In this Lightboard Video, Qumulo’s VP of Product Ben Gitenstein joins me to discuss how Qumulo’s architecture design fixes NAS performance problems at scale and embraces public cloud storage.