Network Attached Storage (NAS) has served the enterprise data center for well over two decades. Originally designed for storing user home directories, these systems have evolved to handle a wide variety of data sets, including databases and virtual machine images. But unstructured data is now more than just user productivity files, and given its rampant growth it may be more than the traditional NAS system can handle. Is it time for enterprises to look at a new construct to meet the requirements created by the explosion of unstructured data?
NAS Systems Were Designed for Home Directories, Not Unstructured Data
If all a NAS system had to do was continue to store data created by user productivity applications, the answer would be a resounding “no”. The problem is that the NAS use case has expanded well beyond simply storing user files. Unstructured data now comes from a variety of sources besides humans, most notably machine-generated data from the “Internet of Things” (IoT). These IoT devices typically create and transmit unstructured data that needs to be stored and eventually processed.
Keeping up with the growth in unstructured data may be the single biggest challenge facing the data center over the next five years. Cost-effectively and reliably storing, as well as delivering, all the data being created by users, sensors and machines may be too much for the old NAS system to handle.
Object Storage is Designed for Unstructured Data
As we discuss in our article, “Object Storage 101”, object storage was designed specifically for unstructured data and can handle that data at massive scale. The reality is that most enterprises will never exceed the file count limit of an object storage system, although they may well exceed the limits of a NAS system.
But unlike the cloud provider use case, the number of files stored is a secondary concern for the enterprise. Enterprises will want to look at object storage for its ability to drive down the cost of storage, safely leverage high-capacity hard disks and feed other processes like analytics. As we discuss in our article “How Object Storage can improve Hadoop”, object storage is an ideal foundation for a “data lake” that can be filled from a variety of data sources and then used to feed a Hadoop infrastructure.
Most object storage systems are built from commodity hardware and have data protection capabilities that allow the safe use of very high capacity hard disk drives. These systems can recover from a drive failure very quickly, regardless of drive size. Finally, most object storage systems can provide access to the data they store through a variety of methods, including legacy protocols like CIFS, NFS and iSCSI, as well as modern REST interfaces such as the Amazon S3-compatible API.
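The exact protection scheme varies by vendor (many use Reed-Solomon-style erasure coding across many drives or nodes), but the underlying idea of rebuilding a lost drive from the surviving pieces can be sketched with a simple XOR parity example. The function names below are illustrative, not from any specific product:

```python
# Toy sketch of parity-based protection: split an object across three
# "drives" plus one XOR parity drive, then rebuild any single failed drive.
# Real object stores use more sophisticated erasure codes; this only shows
# the principle of recovering data from the surviving chunks.
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def split_with_parity(data: bytes, n_drives: int = 3) -> list:
    """Split data into n_drives equal chunks plus one XOR parity chunk."""
    chunk_len = -(-len(data) // n_drives)            # ceiling division
    padded = data.ljust(chunk_len * n_drives, b"\x00")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len]
              for i in range(n_drives)]
    parity = reduce(xor_bytes, chunks)               # XOR of all data chunks
    return chunks + [parity]


def rebuild(chunks: list) -> list:
    """Rebuild a single missing chunk (marked None) by XOR-ing survivors."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) == 1, "XOR parity tolerates exactly one failure"
    survivors = [c for c in chunks if c is not None]
    chunks[missing[0]] = reduce(xor_bytes, survivors)
    return chunks
```

A quick walk-through: store an object, lose one drive, and recover it from the other two plus parity.

```python
drives = split_with_parity(b"unstructured data")
drives[1] = None                                     # simulate a drive failure
restored = rebuild(drives)
assert b"".join(restored[:3]).rstrip(b"\x00") == b"unstructured data"
```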
Enterprises should consider object storage solutions as they begin to refresh old NAS systems. While they may never hit the capacity or file count limitations of current-generation NAS systems, object storage should appeal to enterprises for its ability to create a single pool of storage that holds all unstructured data, while leveraging commodity hardware to keep costs down.