Windows file servers remain one of the most popular ways for an organization to store and share data among employees. One of the reasons for this popularity is that when a server reaches capacity or starts to suffer performance problems, it is relatively easy and inexpensive to stand up another file server. At least until the operational impact of yet another file server is factored in; then this cheap alternative is not so cheap.
The Cost of Another File Server
In most cases it is the operational costs associated with Windows file server sprawl that force organizations to consider an alternative strategy. Each additional server requires identifying some subset of data and moving it there. Then all the drive mappings need updating so that the right users can access the new server while still reaching the old one. And while Windows Distributed File System (DFS) can mask some of that complexity, DFS itself can create management headaches.
Another cost factor is the impact on the backup process. Each new server has to be added to the backup schedule and its data transferred. It becomes another job to manage, monitor and correct when something goes wrong. And since most backup solutions are licensed either by the number of servers or by total capacity, the new file server likely increases data protection costs.
Finally, there are the hard costs of adding more Windows file servers. While licensing, physical servers and hard disk capacity are much less expensive than they used to be, those costs still add up as the server count increases.
The NAS Option
These costs are the primary motivation behind organizations moving to a network attached storage (NAS) system. The initial, and in many cases primary, use case for NAS systems is file server consolidation. While a single NAS can replace a dozen file servers, it does so at a price: the organization loses the familiarity of the Windows front end and the tight integration with Windows security, and in most cases a NAS is simply too much box for the file serving function alone.
A New Approach
The problem is that neither "solution", adding file servers or converting to a NAS, really solves the core issue. Most Windows file servers are added to address capacity rather than performance, so a better approach is to add technology that addresses the fact that users will rarely, if ever, access most of the data on their Windows file servers again.
Object storage systems are designed specifically for the task of providing cost-effective, verified, long-term retention of data. These solutions are typically software based and leverage commodity, off-the-shelf servers, or nodes, organized into a storage cluster. Object storage systems can deliver capacity for pennies per gigabyte yet provide better data protection and data integrity than many primary storage systems.
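The "verified" part typically comes down to content fingerprinting: the system records a hash of each object when it is written and re-checks that hash on every read or background scrub. A minimal Python sketch of the idea follows; the put_object/get_object helpers and local-directory "store" are purely illustrative, not any vendor's API.

import hashlib
from pathlib import Path

STORE = Path("object_store")  # stand-in for the cluster's capacity tier

def put_object(key: str, data: bytes) -> str:
    """Write an object and record its SHA-256 fingerprint alongside it."""
    STORE.mkdir(exist_ok=True)
    digest = hashlib.sha256(data).hexdigest()
    (STORE / key).write_bytes(data)
    (STORE / f"{key}.sha256").write_text(digest)
    return digest

def get_object(key: str) -> bytes:
    """Read an object back and verify it still matches its fingerprint."""
    data = (STORE / key).read_bytes()
    expected = (STORE / f"{key}.sha256").read_text()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"integrity check failed for object '{key}'")
    return data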
The challenge has been how to identify and move data from Windows file servers to the object store, and more importantly how to move it back again if this "never to be accessed again" data actually is accessed. Object storage vendors are now integrating these capabilities into their systems, essentially providing a Windows client that identifies old data, moves it to the object store and returns the data if it is ever accessed again.
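The logic in such a client is straightforward to sketch. The example below assumes an S3-compatible object store and a simple "not accessed in a year" policy; the share path, bucket name, age threshold and use of boto3 are illustrative assumptions, not a description of any particular vendor's product (real clients typically leave reparse-point stubs rather than empty files).

import os
import time
from pathlib import Path

import boto3  # assumes an S3-compatible object store

SHARE_ROOT = Path(r"D:\Shares")   # hypothetical file server share
BUCKET = "file-server-archive"     # hypothetical bucket name
AGE_LIMIT = 365 * 24 * 3600        # "old" = not accessed in a year

s3 = boto3.client("s3")

def archive_old_files() -> None:
    """Move files not accessed in a year to the object store,
    leaving a zero-byte stub so the path still exists for users."""
    cutoff = time.time() - AGE_LIMIT
    for path in SHARE_ROOT.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            key = str(path.relative_to(SHARE_ROOT)).replace(os.sep, "/")
            s3.upload_file(str(path), BUCKET, key)
            path.write_bytes(b"")  # stub; a real client uses a reparse point

def recall_file(relative_path: str) -> None:
    """Bring a file back from the object store when a user opens the stub."""
    key = relative_path.replace(os.sep, "/")
    s3.download_file(BUCKET, key, str(SHARE_ROOT / relative_path))

if __name__ == "__main__":
    archive_old_files()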
StorageSwiss Take
Organizations don't actually have a Windows file server sprawl problem; they have a data growth problem that they attempt to mitigate by adding more servers. They are trying to attack the problem with the wrong weapon. What IT needs is a data management solution that moves inactive data to a less expensive storage tier designed for long-term data retention and protection. Armed with this weapon, IT can slow or even eliminate Windows file server sprawl while improving operations and data protection.