Analyst Blog: Should Enterprises replace NAS with Object Storage?

Network Attached Storage (NAS) has served the enterprise data center for well over two decades. Originally designed for storing user home directories, these systems have evolved to handle a wide variety of data sets, including databases and virtual machine images. But unstructured data is now more than just user productivity files, and given its rampant growth it may be more than the traditional NAS system can handle. Is it time for enterprises to look at a new construct to meet the requirements created by the explosion of unstructured data?

NAS Systems Were Designed for Home Directories Not Unstructured Data

If all a NAS system had to do was continue to store data created by user productivity applications, the answer would be a resounding “no”. The problem is that the NAS use case has expanded well beyond simply storing user files. Unstructured data now comes from a variety of sources besides humans, most notably the machine-generated data associated with the “Internet of Things” (IoT). These IoT devices typically create and transmit unstructured data that needs to be stored and eventually processed.

Keeping up with the growth in unstructured data may be the single biggest challenge facing the data center over the next five years. Being able to cost effectively and reliably store as well as deliver all the data being created by users, sensors and machines may be too much for the old NAS system to handle.

Object Storage is Designed for Unstructured Data

As we discuss in our article, “Object Storage 101”, object storage was designed specifically for unstructured data and can handle that data on a massive scale. The reality is that most enterprises will never hit the file count limit of an object storage system, although they may well exceed the limits of a NAS system.

But unlike the cloud provider use case, the number of files stored is a secondary concern for the enterprise. Enterprises will want to look at object storage for its ability to drive down the cost of storage, safely leverage high-capacity hard disks and feed other processes like analytics. As we discuss in our article “How Object Storage can improve Hadoop”, object storage is an ideal foundation for a “data lake” that can be filled from a variety of data sources and then used to feed a Hadoop infrastructure.
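For readers wondering what that data lake connection looks like in practice, Hadoop’s s3a connector can point directly at an S3-compatible object store, letting jobs read data in place rather than copying it into HDFS first. A minimal sketch, assuming a hypothetical on-premises endpoint (the URL and credentials below are placeholders, not real values):

```xml
<!-- core-site.xml: aim Hadoop's s3a connector at an S3-compatible
     object store; endpoint and keys are illustrative placeholders -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>https://objectstore.example.com</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRET_KEY</value>
</property>
```

With that configuration in place, a command like `hadoop fs -ls s3a://datalake/` browses the object store as if it were a Hadoop file system.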

Most object storage systems are built from commodity hardware and have data protection capabilities that allow the safe use of very high-capacity hard disk drives. These systems can recover from a drive failure very quickly, regardless of drive size. Finally, most object storage systems can provide access to the data they store through a variety of methods, including legacy protocols like CIFS, NFS and iSCSI, while also supporting modern protocols like REST and the Amazon-compatible S3 API.
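The fast-rebuild claim follows from simple arithmetic: a traditional RAID rebuild funnels an entire drive’s contents through a single replacement drive, while an erasure-coded object store reconstructs shards in parallel across many drives at once. A back-of-the-envelope sketch (the drive size, write speed and drive count below are illustrative assumptions, not vendor figures):

```python
def rebuild_hours(drive_tb, write_mb_s, participating_drives=1):
    """Hours to reconstruct one failed drive's data when the rebuild
    write load is spread across `participating_drives` targets."""
    total_mb = drive_tb * 1_000_000
    return total_mb / (write_mb_s * participating_drives) / 3600

# Traditional RAID: one spare drive absorbs the entire rebuild.
raid = rebuild_hours(drive_tb=10, write_mb_s=150)  # ~18.5 hours

# Object store: erasure-coded shards rebuild across, say, 40 drives.
scale_out = rebuild_hours(drive_tb=10, write_mb_s=150,
                          participating_drives=40)  # under 30 minutes
```

The larger the cluster, the more drives participate in the rebuild, which is why rebuild time stays short even as individual drive capacities grow.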

StorageSwiss Take

Enterprises should consider object storage solutions as they begin to refresh aging NAS systems. While they may never hit the capacity or file count limitations of current generation NAS systems, object storage should appeal to enterprises because of its ability to create a single pool of storage for all unstructured data, while leveraging commodity hardware to keep costs down.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

3 comments on “Analyst Blog: Should Enterprises replace NAS with Object Storage?”
  1. Tim Wessels says:

    Well, Mr. Crump offers some perfectly sensible guidance on why NAS systems should be replaced with object-based storage clusters. In a world of rapidly increasing unstructured data (10x to 50x the growth of structured data), the cost of storing data reliably is a major consideration. NAS systems cannot compete on price, and object-based storage clusters can. When it is time for the next NAS refresh or replacement, serious thought should be given to replacing NAS with object-based storage clusters.

  2. sam says:

    I like the idea of object storage and its advantages, such as massive scalability and lower cost than a traditional NAS array. But how does object storage handle home directories and departmental shares? Can object arrays perform well when thousands of users are accessing the same system? Also, how does object storage handle updates to a file in a departmental share? (Meaning object storage does not support locking and sharing mechanisms on a single file.)

    • wcurtispreston says:

      The way I read the article, home directories are the one use case where NAS makes a lot of sense, at least for the active data from those users. However, the use cases for NAS have moved far beyond that, and it is for those use cases that object storage makes a lot of sense.

      It also makes a lot of sense for data in users’ home directories after its perceived value has dropped and all it is doing is taking up space.

