The 3 Problems With Adding Capacity To Your NAS – And How To Solve Them

Disk-based Network Attached Storage (NAS) systems have become the workhorse of the data center. These systems are the first responders to the rampant growth of unstructured data facing organizations of all sizes. As a result, these organizations need to add capacity to their NAS systems, and often need to add entirely new NAS systems as well. Capacity expansion may seem like the easy way out initially, but over time it can cost an organization far more than the upfront price of a disk shelf, and in the process make IT less efficient.

Is Adding Capacity The Answer?

As mentioned above, the most common response to capacity demands is to “simply” add another shelf of storage. This approach appeals because most modern NAS systems can add a drive shelf without any disruption to data access. Also, most of these NAS systems can seamlessly add the new capacity to existing volumes.

Problem 1 – Cost

But the reality is that adding capacity to a NAS system can create as many problems as it seems to solve. The first problem, somewhat obviously, is the cost and time involved in purchasing and implementing the additional capacity. On top of that, there is the cost to power each additional shelf once it is added to the system, as well as the rack space that the shelf or shelves occupy. These costs continue to pile up over the life of that shelf and each additional one.

Problem 2 – Increased Data Protection Complexity

The second problem that adding capacity to the NAS system causes is that there is more data to protect and back up. This strains the backup servers and the network, and it adds another layer of management complexity. Most data protection processes will continue to protect the old data and the new data with equal priority. As a result, data protection capacity needs to be expanded at the same pace as production NAS capacity, if not faster, since the data protection process holds multiple copies of the information on the NAS.

Problem 3 – Capacity Limitations

The third problem with adding capacity is that a NAS system can only support so much physical capacity. In reality, most NAS systems never reach the maximum capacity that the data sheet implies. As more and more files are added to the system, overall performance begins to degrade. In fact, some organizations have a standard policy that no NAS can be filled beyond 50% of its capacity. The only option at that point is to purchase an additional NAS system, which leads many organizations to operate a farm of NAS systems, each using only half of its available storage space.

The Capacity Reality

The reality is that most of these expansions are totally unnecessary. The majority of data on most NAS systems has not been accessed in years; this data could safely be deleted or archived. In fact, some studies indicate that as much as 80% of the data on a NAS has not been accessed in 90 days. If that 80% could be safely removed from the NAS, it would not only eliminate the need for the next capacity upgrade; it might eliminate the need for the next several. Removing this static data and placing it on a less expensive form of storage is an obvious cost savings. Despite this incredible potential, very few data centers implement a data archiving strategy.
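Storage planners who want to test that 80% figure against their own environment can get a rough estimate with a simple access-time scan. The Python sketch below is a minimal illustration only; the mount point is a hypothetical example, and it assumes the filesystem actually records access times (shares mounted with the noatime option will skew the result).

    # Minimal sketch: estimate how much NAS data has gone unaccessed
    # for 90 days. The /mnt/nas path is hypothetical, and the scan
    # assumes access times are recorded (no noatime mount option).
    import os
    import time

    MOUNT = "/mnt/nas"
    CUTOFF = time.time() - 90 * 24 * 3600  # 90 days ago

    total_bytes = 0
    stale_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(MOUNT):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or is unreadable; skip it
            total_bytes += st.st_size
            if st.st_atime < CUTOFF:
                stale_bytes += st.st_size

    if total_bytes:
        print(f"{stale_bytes / total_bytes:.0%} of "
              f"{total_bytes / 1e12:.2f} TB not accessed in 90 days")

If the scan reports anything close to the 80% figure above, the case for an archive tier makes itself.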

The problem is that while there are plenty of tools on the market to identify this data, there are limited tools to help storage planners move it and set up some form of transparent link from the old location to the new one. The transparent link is critical so that users can seamlessly get to their data even after it has been removed from the primary NAS.
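The simplest form of transparent link is a filesystem symlink: the file moves to the archive tier and a pointer stays behind at the original path. The sketch below illustrates only the concept, with hypothetical mount points; commercial HSM products typically use stub files or filesystem-level redirection rather than plain symlinks, which have caveats of their own (for example, they break if the archive share is mounted at a different path on another client).

    # Minimal sketch of a "transparent link": move a static file to the
    # archive share, then leave a symlink at the original path so users
    # still find the file where they expect it. Paths are hypothetical.
    import os
    import shutil

    def archive_with_link(src, primary_root, archive_root):
        """Move src to the archive tier; leave a symlink at the old path."""
        dst = os.path.join(archive_root, os.path.relpath(src, primary_root))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)   # copies across filesystems, then unlinks
        os.symlink(dst, src)    # the transparent link

    archive_with_link("/mnt/nas/projects/2010/report.doc",
                      "/mnt/nas", "/mnt/archive")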

The Capacity Exit Strategy

What NAS systems need is an exit strategy: a way to migrate data off the system so that they do not need additional capacity, or at least do not need it as frequently. It should be easy for NAS vendors to build in the capability to migrate data to a secondary device so that primary NAS storage can be upgraded less frequently. Not surprisingly, most NAS systems don’t have this capability, since providing it would potentially eliminate, or at least reduce, future disk capacity sales.

At this point only one NAS vendor, Hitachi Data Systems, supports this as an integrated capability, in its HNAS product, which can send old data to a secondary NFS mount point. The problem is that this secondary mount point, because it is NFS-based, typically means another high-capacity NAS, which, while lower in cost, creates many of the same problems described above, just on less expensive disk.

Tape – The Ultimate NAS Target

Tape-based storage is the ideal mechanism to preserve this data. It is reliable, very cost effective and requires no power unless it is being accessed. The problem with tape is that it does not typically present itself as an NFS mount point; instead, it must be written to via SCSI commands, which no NAS system is able to do.

Solutions like Crossroads StrongBox bridge this gap. They provide a cost-effective tier that combines an integrated disk front end with a tape library and presents itself on the network as an NFS mount point. These solutions essentially abstract disk and tape and present an archive tier that can be interacted with like any other NAS in the environment. This allows solutions like the Hitachi HNAS, or other data movers, to write directly to the tape tier without alteration. The first problem, the cost of adding capacity, is eliminated.

Once data lands on the archive tier, it is instantly and automatically copied to the tape library for redundant data protection, eliminating the need to back this data up any longer. This not only eliminates the data protection problem, it actually improves the data protection process.

With this type of solution in place, the capacity of the primary NAS needs to be increased much less frequently, and archiving can even free existing capacity. This eliminates the third problem, NAS systems that use their capacity inefficiently.

Waiting to Exit

HNAS and other hierarchical storage management (HSM) solutions make it easy to archive data: their rule-based data movers automatically manage static data far more cost effectively when combined with disk/tape solutions.
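For a concrete picture of what a rule-based data mover does, the sketch below applies a simple age policy across a primary share, relocating static files to the archive tier and leaving symlinks behind. It builds on the earlier sketches; the 90-day threshold and the mount points are assumptions for illustration, not defaults of HNAS or any other product.

    # Minimal sketch of a rule-based data mover: anything not accessed
    # within the policy window moves to the archive tier, with a symlink
    # left at the original path. Threshold and paths are assumptions.
    import os
    import shutil
    import time

    POLICY_DAYS = 90
    CUTOFF = time.time() - POLICY_DAYS * 24 * 3600

    def archive_with_link(src, primary_root, archive_root):
        dst = os.path.join(archive_root, os.path.relpath(src, primary_root))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)
        os.symlink(dst, src)

    def run_policy(primary_root, archive_root):
        for dirpath, _dirnames, filenames in os.walk(primary_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                # Skip links left by earlier runs and recently used files.
                if os.path.islink(path) or os.lstat(path).st_atime >= CUTOFF:
                    continue
                archive_with_link(path, primary_root, archive_root)

    run_policy("/mnt/nas", "/mnt/archive")

A real data mover layers richer policies on top of this loop (file type, owner, size, age since modification) and verifies each copy before replacing the original, but the mechanics are the same.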

But most users are now sophisticated enough to manage multiple mount points, and operating systems now present network shares graphically. Since archive tier solutions appear on the network like any other share, storage planners are finding it relatively easy to train their users to look for a file in “/archive” if it is not in their “/home” directory.

Conclusion

NAS system designers have made adding capacity relatively easy: there is no downtime and volumes expand automatically. The temptation for IT is to assume that this is the simplest way to go. The reality is that it is not. Adding capacity to a NAS adds costs, both up front and long term. It also puts new pressure on the already stressed backup process, and because NAS systems don’t use the additional capacity effectively, the cost problem gets even worse.

Solutions like Crossroads StrongBox, which abstract disk and tape to present a virtualized file storage system, may be the ultimate answer to the capacity problem. These systems can grow almost without limit while remaining very cost effective both short and long term, all the while reducing pressure on the backup process.

Crossroads is a client of Storage Switzerland


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a highly sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

4 comments on “The 3 Problems With Adding Capacity To Your NAS – And How To Solve Them”
  1. undertaker98 says:

    Hitachi is not the only NAS vendor with file tiering capability. EMC has the Cloud Tiering Appliance for its own NAS, and it can move archive files not only to storage but also to Amazon S3. NetApp has the same file tiering capability, but I am not sure it can work with the cloud.

  2. George Crump says:

    The article does say “and other hierarchical storage management (HSM) solutions.”

  3. Fred Oh says:

    I’m with Hitachi, but we didn’t sponsor this report. So actually, if we talk about natively being able to tier and migrate files from Hitachi NAS to an external object store (like our Hitachi Content Platform) and/or the Amazon cloud via S3 at an enterprise level, then we are the only ones. We do not use any 3rd-party technology to achieve this. The Policy Manager, data migration software and external volume links are all built into the HNAS platform. HNAS is an enterprise-class NAS solution that can scale to 8 nodes and 32PB of usable capacity. Hitachi also has cloud on-ramp appliances, which we call Hitachi Data Ingestor, that can tier and migrate to our HCP as well. So depending on the customer’s needs and use case, we have solutions to satisfy them at different scales and price points.

  4. […] storage capacity needs, especially since budgets are flat. As we discussed in a recent article, “The 3 Problems With Adding Capacity To Your NAS”, the standard operating procedure of just adding capacity to your primary disk/NAS or buying […]

