What is a Storage Controller?

Storage arrays all have some form of a processor embedded into a controller. As a result, a storage array’s controller is essentially a server that’s responsible for performing a range of functions for the storage system. Think of it as a storage computer. This “computer” can be configured to run by itself (single controller), in a redundant pair (dual controller) or even as a node within a cluster of servers (scale out storage). Each controller has an I/O path to communicate to the storage network or the directly attached servers, an I/O path that communicates to the attached storage devices or shelves of devices and a processor that handles the movement of data as well as other data-related functions, such as RAID and volume management.

In the modern data center the performance of the storage array can be directly affected (and in many cases determined) by the speed and capabilities of the storage controller. The controller’s processing capabilities are increasingly important, for two reasons. The first is the high-speed storage infrastructure. The network can now easily send data at 10 Gigabits per second (using 10 GbE) or over Gen 5 (16 Gbps) Fibre Channel. That means the controller needs to process this inbound data at wire speed while also performing actions on it, such as generating RAID parity.
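Parity generation is simple arithmetic but has to run on every write. A minimal sketch of the idea behind RAID-5-style parity (the function names here are illustrative, not from any real array firmware) shows why: each stripe write requires XOR-ing every data block, byte by byte.

```python
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """Compute RAID-5-style parity as the byte-wise XOR of the data stripes."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover a lost stripe by XOR-ing the survivors with the parity."""
    return xor_parity(surviving + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = xor_parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1  # the missing stripe is recovered
```

The XOR itself is cheap per byte, but multiplied across every inbound write at 10 GbE or 16 Gbps line rates, it consumes a meaningful share of the controller's CPU cycles.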

Also, the storage system may have many disk drives attached to it, and the storage controller has to be able to communicate with each of them. The more drives there are, the more performance the storage controller has to sustain. Thanks to solid-state drives (SSDs), even a very small number of drives may be able to generate more I/O than the controller can support. The controller used to have idle time between drive I/Os in which to perform other functions; with high drive counts or high-performance SSDs, that time is almost gone.

The second reason that the performance capabilities of the storage controller are important is that the processor on the controller is responsible for an increasing number of complex functions. Besides basic functions, such as RAID and volume management, today’s storage controller has to handle tasks such as snapshots, clones, thin provisioning, auto-tiering and replication.

Snapshots, clones and thin provisioning are particularly burdensome since they dynamically allocate storage space as data is written or changed on the storage system. This is a very processor-intensive task. The storage system commits to the connecting host that the capacity it expects has already been set aside, and then works in the background to keep that commitment.
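The allocate-on-write pattern behind thin provisioning can be sketched in a few lines. This is a hypothetical simplification (the `ThinVolume` class and 4 KB block size are illustrative assumptions, not any vendor's implementation), but it shows the core idea: capacity is promised up front while physical blocks are assigned only when writes arrive.

```python
class ThinVolume:
    """Hypothetical allocate-on-write volume: capacity is promised up front,
    but physical blocks are only assigned when data is actually written."""
    BLOCK = 4096

    def __init__(self, promised_bytes: int):
        self.promised = promised_bytes
        self.block_map = {}  # logical block -> data, allocated on demand

    def write(self, offset: int, data: bytes) -> None:
        block = offset // self.BLOCK
        self.block_map[block] = data  # allocation happens here, not at creation

    def physical_used(self) -> int:
        return len(self.block_map) * self.BLOCK

vol = ThinVolume(promised_bytes=1 << 40)  # promise 1 TB to the host
vol.write(0, b"hello")
# only one 4 KB block is physically consumed, despite the 1 TB promise
```

Every host write now involves a map lookup and possibly a new allocation, which is the bookkeeping that makes these features processor-intensive.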

Automated tiering moves data between types or classes of storage so that the most active data is on the fastest type of storage and the least active is on the most cost-effective class of storage. This moving of data back and forth is a lot of work for the controller. Automated tiering also requires that the controller analyze access patterns and other statistics to decide which data should be where. You don’t want to promote every accessed file, only files that have reached a certain level of consistent access. The combined functions represent a significant load on the storage processors.
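The promotion rule described above, moving only data that shows consistent access rather than every touched file, can be sketched with a simple counter and threshold. This is an illustrative model only; the `TieringMonitor` class and its threshold are assumptions, and real arrays weigh far richer statistics over time windows.

```python
from collections import Counter

class TieringMonitor:
    """Hypothetical sketch: promote a block to the fast tier only after it
    has been accessed at least `threshold` times, not on first touch."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.hits = Counter()
        self.fast_tier = set()

    def record_access(self, block_id: str) -> None:
        self.hits[block_id] += 1
        if self.hits[block_id] >= self.threshold:
            self.fast_tier.add(block_id)  # consistent access earns promotion

monitor = TieringMonitor(threshold=3)
for _ in range(3):
    monitor.record_access("hot-block")
monitor.record_access("cold-block")   # a single touch is not enough
assert "hot-block" in monitor.fast_tier
assert "cold-block" not in monitor.fast_tier
```

Maintaining these counters for every block, on every I/O, is exactly the kind of background analysis that adds to the controller's load.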

There are, of course, future capabilities that will also need some of the processing power of the controller. An excellent example is deduplication, which is becoming an increasingly popular feature on primary storage. As we expect more from our storage systems the storage controller has an increasingly important role to play, and will have to get faster or be able to be clustered to keep up.
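To see why deduplication is CPU-hungry, consider a minimal content-addressed store. This sketch is an assumption-laden simplification (the `DedupStore` class is hypothetical, and production systems use much more elaborate fingerprint indexes), but it shows the essential cost: every write must be hashed and looked up before it lands on disk.

```python
import hashlib

class DedupStore:
    """Hypothetical content-addressed store: identical blocks are written
    once and shared via their fingerprint."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> physical data
        self.refs = []     # logical writes, recorded as fingerprints

    def write(self, data: bytes) -> None:
        fp = hashlib.sha256(data).hexdigest()  # hashing every write costs CPU
        self.blocks.setdefault(fp, data)       # store only the first copy
        self.refs.append(fp)

store = DedupStore()
for _ in range(10):
    store.write(b"same 4KB block" * 256)
assert len(store.refs) == 10    # ten logical writes
assert len(store.blocks) == 1   # one physical copy
```

Running that fingerprinting inline on primary storage, rather than on a backup window, is what pushes the demand for faster or clusterable controllers.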

From a use perspective it’s important to pay attention to the capabilities of the controller. Does it have enough horsepower to meet your current and future needs? What are the options if you reach the performance limits of the processor? To support adequate performance as storage systems grow and add CPU-intensive features, companies need to budget for extra headroom in their controllers. A system should either ship with plenty of spare processing power, allow controller processing power to be added later, or follow one of the scale-out storage strategies such as the Storage Hypervisor, Scale Out Storage or Grid Storage.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
