Even if archive storage were the same price as primary storage, you’d still want to archive. While most in IT look at archiving as a way to reduce the cost of primary storage, there is a lot more to it than that. A quality archive process can reduce the cost of the data protection architecture and help protect data from cyber-attacks.
An archive process can store data more efficiently than primary storage, so even if the archive storage system and software are the same price as the primary storage system, the archive will store more real data per dollar. At the heart of the problem is that primary storage treats all data the same.
High Availability Leads to Inefficiency
Few will dispute that roughly 80 percent of the data on primary storage has not even been opened in six months. Yet that 80 percent is treated as just as valuable as the 20 percent that is active. That means the data sits on clustered, highly available storage. It is backed up every night and in many cases replicated to a disaster recovery site, consuming capacity there, even though in a disaster that 80 percent is unlikely to be needed.
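The multiplication effect is easy to quantify. A minimal sketch, using assumed figures (100 TB of primary data, one DR replica, two backup copies) rather than numbers from any specific environment:

```python
# Illustrative arithmetic; all figures are assumptions for the example.
primary_tb = 100          # total data on primary storage
inactive_ratio = 0.80     # share not opened in six months
copies_per_tb = 1 + 1 + 2  # primary copy + DR replica + 2 backup copies

inactive_tb = primary_tb * inactive_ratio
inactive_footprint_tb = inactive_tb * copies_per_tb
print(inactive_footprint_tb)  # 320.0 TB consumed by data no one opens
```

Under these assumptions, 80 TB of idle data quietly occupies 320 TB of protected, high-cost capacity across the environment.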
Copy Data Leads to Inefficiency
In addition to data redundancy for protection, there are also multiple copies of the exact same file on primary storage. When the data was active, it was probably copied several times to feed processes like testing and development, reporting and analytics. And because the data center moves so fast, no one had time to go through the storage system and clean up the old copies. The reality is you only need two copies (one on-premises and one at a DR site) of a file that no one is accessing.
Another area of savings is software licensing. Most software, especially data protection, storage monitoring and virus scanning software, is sold based on the amount of data it works with: essentially capacity-based licensing. If 80 percent of your data can be moved to an archive where it no longer needs to be backed up, monitored or checked for viruses, your software licensing costs will drop dramatically.
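The licensing savings follow directly from the capacity math. A quick sketch, with a hypothetical per-terabyte license rate and environment size (both assumptions, not figures from the article):

```python
# Hypothetical capacity-based licensing example; all numbers are assumptions.
license_cost_per_tb = 50   # $/TB/year across backup, monitoring and AV licenses
primary_tb = 500           # licensed capacity before archiving
archivable_pct = 80        # inactive share that moves to the archive

before = primary_tb * license_cost_per_tb
after = primary_tb * (100 - archivable_pct) // 100 * license_cost_per_tb
print(before, after)  # 25000 5000
```

At these assumed rates, moving the inactive 80 percent cuts the annual capacity-based license bill from $25,000 to $5,000.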
But Archive Storage is Cheaper
The good news is that archive storage is cheaper. Secondary storage is typically 30 to 40 percent less expensive than primary storage. It also tends to scale much further, which means it doesn’t suffer the costly hardware upgrade and migration process common with primary storage. An archive process will also reduce how often primary storage needs those upgrades, since primary storage is no longer bogged down managing old data.
What’s the Hold Up?
As far as IT projects go, archive has one of the most impressive returns on investment (ROI) available. Why, then, is it also one of the least implemented projects in IT? Most IT professionals are concerned about the time and cost involved in setting up the archive. Modern archive software and hardware, however, are much easier to set up and configure than the horror stories that tainted the process in the past would suggest.
Most operating systems fully support data migration and file stubbing for seamless data recall, so even the preparatory meetings should be easy. Once the archive hardware and software are installed, simply start by archiving data that is three years old and wait to see if anyone complains. If not, archive data that is two years old, and so on. Eventually you’ll get to the point that all data more than nine months old is on the archive, and you’ll never hear a complaint.
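The staged rollout above can be sketched as a simple age-based sweep. This is a simplified illustration, not how any particular archive product works: the function name, paths, and policy are all assumptions, and real archive software would leave a stub file behind for transparent recall rather than just moving the data.

```python
import os
import shutil
import time

def archive_older_than(src_root: str, archive_root: str, years: float) -> list[str]:
    """Move files whose last-modified time is older than `years` into the
    archive tree, preserving relative paths. Simplified sketch: a real
    archive process would also leave a stub behind for seamless recall."""
    cutoff = time.time() - years * 365 * 24 * 3600
    moved = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) < cutoff:
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(archive_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                moved.append(rel)
    return moved

# Start conservatively, then tighten the policy as confidence grows:
# archive_older_than("/mnt/primary", "/mnt/archive", years=3)
# ...wait for complaints, then...
# archive_older_than("/mnt/primary", "/mnt/archive", years=2)
```

The design choice mirrors the advice in the text: the age threshold is a parameter, so the same sweep runs at three years, then two, then down to nine months as confidence grows.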
Archive projects typically pay for themselves in less than six months and then continue to deliver savings to the organization for a very long time. The financial case for committing to an archiving project couldn’t be more obvious, and today’s archive software, hardware and operating system integration make implementation and operation significantly easier. In many cases, the archive process can migrate all of the organization’s inactive data off primary storage without users even knowing it happened. The time is now to jump on the archive bandwagon.