Getting Data Off a Filer and Living to Tell About It

Network Attached Storage (NAS) systems are primary storage systems, ideal for providing very fast access to “hot” (active) data sets that change frequently. Organizations have continuously added capacity to these systems to keep pace with the ever-rising tide of unstructured data, which compliance requirements, as well as user demand, may force them to retain indefinitely.

Unfortunately, a high percentage of the data stored on these expensive, high-performance systems grows cold: it is no longer accessed, yet it continues to consume valuable primary storage space needed for new active data sets. What organizations need is a way to determine which data is inactive and migrate it to less expensive storage tiers.

To accomplish this tiering, IT professionals need to identify and select the data sets eligible for migration, physically move those data sets to the appropriate storage tiers, and establish a process to locate and recall any of those data sets if they are needed again. Doing all of this manually is labor intensive, time consuming, and expensive. Organizations need a way to handle these tasks automatically, based on customer-defined data policies.
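The first of those steps, identifying cold data, typically means comparing each file's last-access time against a policy threshold. As a minimal illustrative sketch (the 90-day threshold and function name are our own assumptions, not part of any particular product):

```python
import os
import time

# Hypothetical policy: files not accessed in the last 90 days are "cold".
COLD_AGE_SECONDS = 90 * 24 * 3600

def find_cold_files(root, now=None):
    """Walk a file tree and return paths whose last-access time
    (st_atime) is older than the policy threshold."""
    now = now if now is not None else time.time()
    cold = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > COLD_AGE_SECONDS:
                cold.append(path)
    return cold
```

A real implementation would apply richer policies (size, owner, file type) and, as discussed below, would need to avoid full scans of very large file systems.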

HSM Revisited

A number of years ago, vendors tried to address these problems with a solution known as Hierarchical Storage Management (HSM). However, the approach had several drawbacks that made it unacceptable to most organizations. Basically, HSM performed a time-consuming scan of the file system(s), moved the older data to a tape library, and left a stub file in the original location. Later, during the Information Lifecycle Management (ILM) phase, tape was replaced with cheap NAS systems. While a cheap NAS speeds up recalls, the problem is that in both cases the file systems still had to be scanned. Worse, the migration software essentially performed an OS hack to detect file recalls so that data could be transparently returned to the user. These hacks proved unreliable, often breaking with each operating system update.
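The stub-file mechanism itself is simple to sketch. In the toy version below (the stub extension and JSON format are purely illustrative assumptions, not any vendor's actual on-disk format), migration moves the data to a cheaper tier and leaves behind a small pointer; recall reverses the move. What HSM products could not do cleanly was intercept a user's ordinary file open and trigger the recall transparently, which is where the OS hacks came in:

```python
import json
import os
import shutil

STUB_MARKER = ".hsm_stub"  # hypothetical stub-file extension

def migrate_with_stub(path, archive_dir):
    """Move a file to the archive tier and leave a small stub
    at the original location recording where the data went."""
    os.makedirs(archive_dir, exist_ok=True)
    archived = os.path.join(archive_dir, os.path.basename(path))
    shutil.move(path, archived)
    with open(path + STUB_MARKER, "w") as stub:
        json.dump({"archived_to": archived}, stub)
    return archived

def recall(stub_path):
    """Read a stub and move the data back to its original location."""
    with open(stub_path) as stub:
        archived = json.load(stub)["archived_to"]
    original = stub_path[: -len(STUB_MARKER)]
    shutil.move(archived, original)
    os.remove(stub_path)
    return original
```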

Object Storage as a Target

To handle ever-increasing amounts of data while reining in storage sprawl and reducing costs, organizations began turning to object storage systems, both cloud-based and on-premises. These systems can handle almost unlimited amounts of data while containing costs through economies of scale, using standard commodity hardware instead of expensive proprietary systems. Object storage also provides advanced data protection methods that help ensure data integrity and stability while avoiding the limitations of RAID. But organizations were still faced with the challenge of automating the process of identifying cold data on all their NAS systems and migrating it to the object store.

An Efficient and Effective Solution

What IT professionals need is a comprehensive solution that automates the identification and migration of cold data from NAS systems to object storage. Such a solution should take advantage of the operating system and NAS APIs now available, which allow clean access for performing migrations and recalls. It should automatically migrate selected data sets to an object storage system based on user-defined policies, and both migration and recall should be completely transparent to NAS administrators and end users.

StorageSwiss Take

A comprehensive solution that can handle almost unlimited amounts of data and automate the task of intelligently migrating cold data from expensive NAS systems to object storage can enable an organization to contain and reduce data center costs by reining in storage sprawl. It also allows the organization to benefit from the advanced data protection mechanisms in object storage to further safeguard its data.

One such solution is Caringo FileFly. It supports migration from NetApp and Windows Storage Server to Caringo's object storage software in a way that is transparent to users and applications. We did a deeper dive on FileFly in one of our latest webinars, ‘What Does Your Next NetApp Refresh Look Like’.

Joseph is a Lead Analyst with DSMCS, Inc. and an IT veteran with over 35 years of experience in the high-tech industries. He has held senior technical positions with several major OEMs, VARs, and system integrators, providing technical pre- and post-sales support for a wide variety of data protection solutions. He also wrote numerous technical analyst articles for Storage Switzerland and served as its chief editor for all technical content until Storage Switzerland closed upon its acquisition by StorONE. In the past, he also designed, implemented, and supported backup, recovery, and encryption solutions, in addition to providing disaster recovery planning, testing, and data loss risk assessments in distributed computing environments on UNIX and Windows platforms for various OEMs, VARs, and system integrators.
