NAS vs. Object: Data Archive – Long Term Data Efficiency

Most unstructured data is never accessed again after it is created, yet it still has to be stored just in case it is needed. When that "just in case" scenario occurs, the data must be retrieved quickly and it must be readable. The problem is that "just in case" may occur at any time, from a few days after the data is created to a few decades later. The storage system has to store this data efficiently for a long period of time, and the need for long-term data efficiency is one of the reasons network attached storage systems are giving way to object storage systems.

What is Long Term Data Efficiency?

Long-term data efficiency means that a storage system can store a large amount of data, both in capacity and in number of items, for decades. But scaling to meet the long-term capacity requirement is not enough. The storage system must also maintain data integrity so that when "just in case" occurs, the data is there and is readable. Finally, long-term data efficiency means the system can store data cost-effectively, from both a hard-cost and an operational perspective.

NAS vs. Object – Which Meets The Long Term Data COST Efficiency Challenge?

Setting cost aside, network attached storage (NAS) systems meet many of the short-term challenges that unstructured data creates, but when it comes to long-term data efficiency most NAS systems fall short. Not only are most NAS systems expensive to purchase upfront, they are also expensive to scale or upgrade. While scale-out NAS systems can spread the cost of scaling more evenly across nodes, scaling is still expensive. And both legacy NAS approaches, scale-up and scale-out, require that the system is powered the entire time it is online, which means the cost to power and cool is constant.

Object storage is typically delivered as software only. The software leverages off-the-shelf servers with internal SSD/HDD storage, which dramatically lowers cost and increases hardware flexibility. Although most object storage software solutions are scale-out in design, some can power down not only storage media but entire nodes. This means the long-term power and cooling costs of an object storage system can rival offline media storage like tape.
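The power and cooling savings come down to simple arithmetic: cost scales with the fraction of time nodes are drawing power. The sketch below illustrates the point with hypothetical figures (node wattage, node count, electricity rate, and duty cycle are all illustrative assumptions, not vendor data):

```python
# Illustrative comparison of annual power/cooling cost for an
# always-on archive tier vs. one that powers nodes down between
# accesses. All figures below are hypothetical assumptions.
WATTS_PER_NODE = 350          # assumed draw per storage node
NODES = 10
COST_PER_KWH = 0.12           # assumed $/kWh, power + cooling combined
HOURS_PER_YEAR = 24 * 365

def annual_cost(duty_cycle: float) -> float:
    # duty_cycle: fraction of the year the nodes are powered on.
    kwh = WATTS_PER_NODE * NODES * HOURS_PER_YEAR * duty_cycle / 1000
    return kwh * COST_PER_KWH

always_on = annual_cost(1.0)    # legacy NAS: powered the entire time online
mostly_off = annual_cost(0.05)  # archive nodes powered down ~95% of the time

print(f"Always-on: ${always_on:,.0f}/yr, mostly powered down: ${mostly_off:,.0f}/yr")
```

Under these assumed numbers, the powered-down configuration cuts the annual power and cooling bill by 95%, which is how an online object archive can approach the operating cost of offline tape.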

NAS vs. Object – Which Meets The Long Term Data Integrity Challenge?


When "just in case" occurs, the data needs to be accessible and readable, but decades may pass between accesses. The storage system needs to continuously verify that data written today will be the same in 10 or 20 years. While some NAS systems can scan the media for defects, most cannot verify specific files. Object storage systems can leverage the unique ID applied to each object. Since that ID is generated from the object's data, recalculating it should always produce the same result. An object storage system can periodically recalculate the ID for each object; if the result differs from the original, the system can take corrective action or notify the storage administrator.
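The integrity check described above can be sketched in a few lines. This is a minimal illustration of content-derived IDs, assuming SHA-256 as the hash; actual object storage products vary in the algorithm and scrubbing schedule they use:

```python
import hashlib

def object_id(data: bytes) -> str:
    # Content-derived ID: hashing the object's bytes yields a
    # deterministic identifier (SHA-256 chosen here for illustration).
    return hashlib.sha256(data).hexdigest()

def verify(stored_id: str, data: bytes) -> bool:
    # Recompute the ID from the bytes currently on disk; a mismatch
    # means the object has silently degraded and needs repair.
    return object_id(data) == stored_id

# Write time: store the object alongside its content-derived ID.
payload = b"archived unstructured data"
original_id = object_id(payload)

# Years later, a periodic scrub recomputes the ID and compares.
intact = verify(original_id, payload)            # True: data unchanged
corrupted = verify(original_id, payload + b"x")  # False: bit rot detected
```

Because the ID depends only on the content, this check works no matter how much time has passed or which node currently holds the object, which is the property that makes decades-long verification practical.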


Long-term data efficiency means making sure that over the course of decades an organization can cost-effectively store data while assuring data integrity. The software-first nature of object storage keeps upfront and ongoing costs low, and the built-in data integrity checking ensures that when you need a file in the future, you can read it.

Sponsored By Caringo

About Caringo

Caringo was founded in 2005 to change the economics of storage by designing software from the ground up to solve the issues associated with data protection, management, organization and search at massive scale. Caringo's flagship product, Swarm, eliminates the need to migrate data into disparate solutions for long-term preservation, delivery and analysis – radically reducing total cost of ownership. Today, Caringo software is the foundation for simple, bulletproof, limitless storage solutions for the Department of Defense, the Brazilian Federal Court System, City of Austin, Telefónica, British Telecom, Johns Hopkins University and hundreds more worldwide.

George Crump is the Chief Marketing Officer of StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March of 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.

