Optimizing a Windows file server improves a number of related processes and systems. In a previous blog we discussed what it means to optimize a file server: automatically identifying inactive files, moving them to secondary storage, and leaving references to them in their place. Once the initial optimization is complete, and assuming the optimization process continues to run afterward, the only files left on the file server are active ones.
On most Windows file servers, inactive files far outnumber active files. This is a result of both human nature and human capability. A person can work on only so many files at a time, and the same applies to a team; the number of files is simply larger. As we covered in the last blog, however, human nature also keeps us from deleting files even after we no longer use them. Together, these two facts produce far more inactive files than active ones, which means moving the inactive files to an object storage system should yield significant space savings on the primary Windows file server. It is entirely plausible that an average organization could reduce its primary storage usage by 90 percent.
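To make the idea of "identifying inactive files" concrete, here is a minimal sketch that flags files whose last-access time is older than a threshold. The 365-day cutoff and the scanning approach are assumptions for illustration only; a real optimization product would apply its own policies.

```python
import os
import time

# Assumed policy threshold: files untouched for a year count as inactive.
INACTIVE_AFTER_DAYS = 365


def find_inactive_files(root, now=None):
    """Yield paths under `root` whose last-access time is older than the cutoff."""
    now = time.time() if now is None else now
    cutoff = now - INACTIVE_AFTER_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is inaccessible; skip it
```

One caveat worth noting: Windows Server has historically disabled last-access-time updates on NTFS by default for performance reasons, so in practice a tool relying on access times must confirm that setting, or fall back to other signals such as modification time.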
Having 90 percent less data improves a number of processes, the first of which is backup. Backing up 10 TB takes far less time and far fewer resources than backing up 100 TB. Combine this with the common rule of thumb that every gigabyte on primary storage generates 10 to 20 GB on backup storage, and you have a very compelling argument: a 90 TB savings on primary storage translates into 900 TB to 1.8 PB of savings on backup storage. That is also 90 percent fewer bytes that must cross the network, and 90 percent fewer bytes that must pass through the CPU and I/O channel of the file server. Backups (and, more importantly, restores) of the Windows file server will also be roughly 10 times faster, since only the active files need to be restored. Think about the resources wasted during a restore of a Windows file server when 90 percent of the files being restored are not actively used by anyone.
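The arithmetic behind those figures is simple enough to spell out. This short sketch uses the numbers from the text (a 100 TB server, a 90 percent inactive share, and a 10x to 20x backup multiplier, all assumptions taken from the example above):

```python
# Worked example of the savings arithmetic described in the text.
primary_tb = 100            # assumed size of the primary file server
inactive_fraction = 0.90    # assumed share of inactive data
backup_multiplier_low = 10  # low end of GB of backup per GB of primary
backup_multiplier_high = 20 # high end of the same rule of thumb

primary_saved_tb = primary_tb * inactive_fraction
backup_saved_low_tb = primary_saved_tb * backup_multiplier_low
backup_saved_high_tb = primary_saved_tb * backup_multiplier_high

print(primary_saved_tb)      # 90 TB freed on primary storage
print(backup_saved_low_tb)   # 900 TB freed on backup storage (low end)
print(backup_saved_high_tb)  # 1800 TB, i.e. 1.8 PB (high end)
```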
Any proactive maintenance performed on the file system of the Windows file server, such as a file system consistency check, will also be faster. Since such checks are typically performed during a system reboot, reboots of the file server will be faster as well, which in turn makes system upgrades faster.
There will probably not be any immediate financial savings, since the recovered capacity will most likely not be repurposed right away. Over time, however, there will be savings as you extend the life of the server and put the freed storage to use, without having to purchase additional capacity just to store inactive files.
Finally, you could also make a strong argument that the data migrated to the object storage system is protected against ransomware. Even if an attack were able to traverse the references and cause migrated files to be pulled back to primary storage, where they could be encrypted, the original copies on the object storage system would still be there, untouched by the attack.
Optimizing Windows file servers simply makes sense. No one disputes that most of the data stored on today's file servers is old, unused data. Moving that data to secondary storage, while allowing it to still be referenced from primary storage, saves space, backup time and resources, and file system maintenance time, and it protects the data against ransomware. Who wouldn't want all of that?
Sponsored by Caringo