Is Object Storage Really the Future of Unstructured Data Storage?

Simply put, unstructured data is breaking traditional network-attached storage (NAS) architectures. The scale-up nature of traditional NAS makes the storage controller a bottleneck for the intensive metadata operations associated with unstructured files, forcing expensive system upgrades before capacity is fully utilized. Additionally, capacity cannot be added or redistributed to keep pace with exponentially growing file counts without taking the entire system down.

Object storage is often hailed as the solution to efficiently handling vast quantities of files. Indeed, object storage architectures are better equipped than legacy NAS to meet capacity requirements at scale. The challenge is that object storage architectures were not designed to deliver the levels of input/output operations per second (IOPS) required by the modern workloads that consume these large volumes of unstructured data – in large part because those workloads are metadata-intensive. Furthermore, object storage architectures are not natively compliant with the Portable Operating System Interface (POSIX), a family of standards that enables application portability by facilitating compatibility between operating systems. By and large, the majority of enterprise production applications are not native to the S3 object storage access protocol, and as a result will require either a rewrite or a gateway for compatibility with the object storage infrastructure – adding penalties in the form of slower performance and greater complexity.
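To make the rewrite-or-gateway point concrete, the sketch below (hypothetical names and paths, not taken from any specific product) contrasts the POSIX file operations a typical application relies on with the whole-object semantics of an S3-style interface. Note that in-place partial writes and atomic renames have no direct S3 equivalent, which is exactly why a gateway or rewrite is needed.

```python
import os
import tempfile

# POSIX-style access: byte-addressable files, in-place updates, rename, stat.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "report.dat")

with open(path, "wb") as f:        # create the file
    f.write(b"hello world")
with open(path, "r+b") as f:       # partial in-place update -- no S3 analog
    f.seek(6)
    f.write(b"POSIX")
size = os.stat(path).st_size       # metadata query
os.rename(path, path + ".bak")     # atomic rename -- no S3 analog

# S3-style access (illustrative comments only, not executed here):
#   s3.put_object(Bucket="b", Key="report.dat", Body=data)  # whole-object write
#   s3.get_object(Bucket="b", Key="report.dat")             # whole-object read
# A partial update means GET the object, modify it locally, then PUT it back
# in full; a "rename" is a server-side copy followed by a delete.
print(size)
```

The asymmetry in the second half is what a gateway must paper over, and translating every in-place update into a full GET/modify/PUT round trip is where the performance penalty comes from.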

Additionally, neither traditional NAS nor object storage was designed for seamless integration with cloud services. That is, they cannot enable files to be exchanged seamlessly between on-premises and off-premises environments. Considering that one of the leading values of the cloud is the ability to access elastic compute resources on demand, it is important that the application can access data natively wherever that data may be stored.

In sum, the primary pain point when it comes to serving unstructured data is not hardware. With new non-volatile memory express (NVMe) storage solutions, hardware is by and large more than capable of delivering the required levels of performance. The challenge lies in the file system itself, which has become a barrier to the application fully taking advantage of that performance. Storage software overheads such as metadata operations and caching substantially limit how close read performance can come to parity with write performance.
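A rough illustration of why metadata overhead, rather than raw media speed, becomes the limiter: in a small-file workload, the per-file metadata work (create, close, stat) dwarfs the bytes actually moved. The micro-benchmark below is a simplified sketch of this idea (the figures and file sizes are arbitrary assumptions, not measurements from any vendor).

```python
import os
import tempfile
import time

# Hypothetical small-file workload: 1,000 files of 256 bytes each.
# Each file costs roughly three metadata operations (create, close, stat)
# to move only 256 bytes of data, so the file system's metadata path --
# not the NVMe media -- dominates the cost.
workdir = tempfile.mkdtemp()
payload = b"x" * 256
num_files = 1000

start = time.perf_counter()
for i in range(num_files):
    p = os.path.join(workdir, f"f{i:05d}")
    with open(p, "wb") as f:    # one create + one write + one close per file
        f.write(payload)
    os.stat(p)                  # one metadata lookup per file
elapsed = time.perf_counter() - start

metadata_ops = num_files * 3    # create, close, stat (rough count)
bytes_moved = num_files * len(payload)
print(f"~{metadata_ops} metadata ops to move {bytes_moved} bytes "
      f"in {elapsed:.3f}s")
```

Scaling this pattern to billions of files is what saturates a scale-up NAS controller long before the underlying drives run out of throughput.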

Watch Storage Switzerland’s on-demand webinar with Qumulo, “NAS vs Object – Can NAS Make a Comeback?,” to learn how to architect an unstructured data strategy that eliminates the need for tradeoffs between capacity and the ability to serve vast unstructured data file counts.


Senior Analyst, Krista Macomber produces analyst commentary and contributes to a range of client deliverables including white papers, webinars and videos for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including: technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.

