Is Storage Management Overload Making IT Less Relevant?

IT administrator “overload” is becoming the new norm for many organizations. Flat or declining IT budgets, combined with accelerating data growth and increasing demand for new business application services, are putting many IT organizations in a quandary: how to meet the needs of the business while maintaining existing staff levels and keeping a lid on costs.

So Many Silos, So Little Time

Across many organizations, it is not uncommon for one or two IT administrators to manage multiple silos of infrastructure. These administrators often juggle responsibility for application systems, virtualized environments, networking resources, storage and data protection. The challenge is that these individuals are perpetually busy. Between end-user requests, troubleshooting application issues and managing infrastructure resources, they have little time left to focus on ways that IT can enhance business revenue and profitability.

Storage Cycle Sucker

One area of opportunity for IT organizations to save on costs, both in capital expenditure and ongoing management, is storage. According to various industry sources, data is doubling approximately every two years. The majority of this growth comes from unstructured data, or data that doesn’t reside within a database: end-user files, PDFs, audio and video files, JPEGs, machine sensor data and so on. With this unrelenting data growth comes the need for additional primary storage capacity and backup storage resources, which of course translates into ever higher storage expenditures year over year.

In addition to increased capital expenditures, the labor expended planning, integrating and refreshing storage environments can be a huge drain on the IT organization’s time. This information also needs to be backed up and stored offsite for DR and archival purposes. Moreover, these upgrade activities need to be performed without impacting application availability or performance, which only adds further stress and complexity to the process.

Profligate Provisioning

Another challenge is storage inefficiency. In many data centers, storage utilization typically hovers between 30% and 40%. One of the chief reasons for this inefficiency is that IT planners must predict how much storage capacity they expect to consume over a 12-36 month time frame and then buy most of this capacity up front. This “thumb-in-the-wind” forecasting often leaves storage needlessly over-provisioned and wasted, driving up the total cost of storage ownership.
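
To put some rough numbers on that, the sketch below uses entirely hypothetical prices and growth figures to compare buying a full three-year capacity forecast on day one with adding capacity each year as it is actually consumed; the year-one utilization of the up-front purchase lands right in that 30-40% range.

```python
# Illustrative sketch with hypothetical prices and growth figures (not from
# the article): buying a full 3-year capacity forecast up front versus adding
# capacity each year as it is consumed, assuming $/TB falls ~15% per year.

cost_per_tb_by_year = [100.0, 85.0, 72.25]   # assumed $/TB in years 1-3
new_capacity_tb = [75, 65, 85]               # assumed capacity consumed each year

# Up-front model: buy the entire 3-year forecast (225 TB) at year-1 pricing.
upfront_spend = sum(new_capacity_tb) * cost_per_tb_by_year[0]

# Just-in-time model: buy each year's increment at that year's pricing.
jit_spend = sum(tb * price for tb, price in zip(new_capacity_tb, cost_per_tb_by_year))

# Utilization of the up-front purchase at the end of year one.
year1_utilization = new_capacity_tb[0] / sum(new_capacity_tb)

print(f"Up-front spend:      ${upfront_spend:,.0f}")
print(f"Just-in-time spend:  ${jit_spend:,.2f}")
print(f"Year-one utilization of the up-front purchase: {year1_utilization:.0%}")
```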

Zero Configuration Storage

To drive down the fully burdened cost of storage, businesses need storage offerings that are effectively “plug-and-play” and require zero configuration. They also need these solutions to allow more fine-grained control over how storage resources are added to the environment. If the solution were based on a scale-out NAS architecture that allowed single, discrete storage nodes to be added to a cluster of storage resources in a “just-in-time” manner, it could eliminate most of the inefficiency that occurs when storage is over-provisioned. Instead, storage capacity could be added to the scale-out system as needed, without disrupting the application environment. And if data were automatically rebalanced as additional capacity was added to the cluster, administrators wouldn’t have to reconfigure storage volumes or perform any additional configuration tasks, freeing up valuable administrator time.
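
As a rough illustration of what automatic rebalancing can look like under the covers, here is a minimal consistent-hashing sketch. This is a generic technique assumed for illustration; the article does not describe Exablox’s actual data placement algorithm. The point is simply that when a node joins, only a fraction of objects relocate and no volumes need to be reconfigured.

```python
# Minimal sketch of automatic rebalancing when a node joins a scale-out
# cluster, using a consistent-hash ring. Generic illustration only, not any
# particular vendor's placement algorithm.
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Cluster:
    def __init__(self, nodes):
        # Each node gets several virtual points on the ring for smoother balance.
        self.points = sorted(
            (ring_hash(f"{n}#{v}"), n) for n in nodes for v in range(100)
        )

    def node_for(self, object_id: str) -> str:
        h = ring_hash(object_id)
        idx = bisect_right(self.points, (h, "")) % len(self.points)
        return self.points[idx][1]

objects = [f"file-{i}" for i in range(10_000)]

before = Cluster(["node-1", "node-2", "node-3"])
after = Cluster(["node-1", "node-2", "node-3", "node-4"])  # node added just-in-time

moved = sum(1 for o in objects if before.node_for(o) != after.node_for(o))
print(f"Objects relocated after adding a node: {moved / len(objects):.0%}")
# Only roughly a quarter of objects move; volumes and applications are untouched.
```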

Commodity Cost Containment

Another way to reduce costs is to leverage highly dense, commodity disk drives. If the scale-out NAS system allowed storage managers to mix and match drive types and densities from any disk drive manufacturer, they could keep their cost per GB very low. For example, by utilizing 6 TB and/or 8 TB drives, businesses could store their data in a very small footprint; fewer spindles require less rack space as well as less power and cooling. Just as importantly, the storage system would allow individual disk drives, or all the drives in a storage node, to be replaced as less expensive, higher-density drives came to market. And the flexibility to use drives from any manufacturer means businesses can benefit from the ongoing price wars between the major disk drive vendors and continue to lower their cost per TB over time.
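
A quick back-of-the-envelope comparison shows why drive density matters so much for footprint and power. The per-drive wattage and enclosure density below are assumptions, not vendor specifications.

```python
# Back-of-the-envelope comparison with assumed figures (not vendor specs):
# spindle count, rack space and power needed for 1 PB of raw capacity.

target_tb = 1000                    # 1 PB of raw capacity
drive_sizes_tb = {"2 TB": 2, "4 TB": 4, "8 TB": 8}
watts_per_drive = 8                 # assumed average draw per spinning drive
drives_per_4u_shelf = 60            # assumed dense 4U enclosure

for label, size in drive_sizes_tb.items():
    spindles = -(-target_tb // size)              # ceiling division
    shelves = -(-spindles // drives_per_4u_shelf)
    print(f"{label} drives: {spindles:3d} spindles, {shelves} x 4U shelves, "
          f"~{spindles * watts_per_drive} W")
```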

Disk Density Dilemma

One of the challenges, however, with deploying highly dense disk drives in a scale-out storage environment is the increased risk of storage system downtime. Before multi-TB drives came to market, average drive rebuild times could be measured in hours, but with the imminent introduction of 10 TB disk drives, rebuilds may take days or even weeks to complete. RAID-6 disk configurations provide some protection, since they can withstand up to two simultaneous drive failures without incurring data loss or downtime. However, as denser spindles with increasingly long rebuild times are added to the environment, the window of exposure, and with it the risk of downtime, grows.
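
The arithmetic behind those rebuild times is straightforward. The sketch below uses assumed sustained rebuild rates; real-world rebuilds are usually slower because the array is also servicing production I/O, which is how multi-TB rebuilds stretch out to days.

```python
# Rough rebuild-time arithmetic. The sustained rebuild rates are assumptions;
# real rebuilds are typically slower because the array is also serving
# production I/O.

def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    capacity_mb = capacity_tb * 1_000_000   # decimal TB, as drives are marketed
    return capacity_mb / rate_mb_per_s / 3600

for capacity in (1, 4, 10):
    for rate in (50, 100):                  # MB/s (assumed)
        hours = rebuild_hours(capacity, rate)
        print(f"{capacity:>2} TB drive at {rate:>3} MB/s: "
              f"{hours:6.1f} hours ({hours / 24:.1f} days)")
```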

Object Storage

This new reality requires a storage architecture that can leverage the economics of dense disk drives while still providing very high levels of resiliency and availability. One way of achieving this is to utilize an object-based storage architecture rather than a traditional RAID configuration. Since object storage disperses data “objects” across multiple drives in a storage node, and then again across multiple storage nodes in a cluster, drive rebuilds can proceed in parallel across potentially dozens of disk drives simultaneously. This can result in much more rapid drive recovery times, and since data is always dispersed across spindles throughout the entire cluster, data availability remains very high. In short, object-based storage can deliver the cost efficiencies of highly dense, scale-out disk configurations while still maintaining the very high levels of availability that businesses require.
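
The simplified simulation below illustrates why dispersal speeds up recovery. Placement here is random and the copy count is an assumption for illustration, not a description of any particular product, but it shows that the surviving copies of a failed drive’s objects are spread across nearly every other drive in the cluster, so rebuild traffic fans out in parallel rather than funneling through a single RAID set.

```python
# Simplified illustration of parallel recovery in dispersed object storage.
# Replica placement here is random; real systems use deterministic placement.
import random

random.seed(1)
NODES, DRIVES_PER_NODE, OBJECTS, COPIES = 8, 12, 50_000, 2

all_drives = [(n, d) for n in range(NODES) for d in range(DRIVES_PER_NODE)]

# Place each object's copies on drives in different nodes.
placement = {}
for obj in range(OBJECTS):
    chosen_nodes = random.sample(range(NODES), COPIES)
    placement[obj] = [(n, random.randrange(DRIVES_PER_NODE)) for n in chosen_nodes]

failed = (0, 0)  # a single failed drive
affected = [o for o, locs in placement.items() if failed in locs]
source_drives = {loc for o in affected for loc in placement[o] if loc != failed}

print(f"Objects to re-protect: {len(affected)}")
print(f"Drives supplying surviving copies in parallel: {len(source_drives)} "
      f"of {len(all_drives) - 1}")
```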

Web Based Storage Control

Maintaining high availability also requires continuous monitoring of the storage environment. Since disk devices are the components most likely to fail, gathering metrics on the health of each individual drive can enable administrators to identify when drives are starting to fail and proactively replace them before they crash. And if this information is presented through a cloud-based monitoring platform, administrators can view their storage system from any web-connected device. This allows storage managers to stay connected regardless of their location, without the inconvenience of maintaining yet another Windows server running Java and connecting through a VPN just to gain visibility into their storage resources.
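
Conceptually, the proactive-replacement logic is simple. The sketch below uses hypothetical drive metrics and thresholds; it is not Exablox’s monitoring API, just an illustration of flagging drives on early-failure indicators such as reallocated or pending sectors.

```python
# Hypothetical sketch of a proactive drive-health check. The metric names and
# thresholds are illustrative only; this is not any vendor's monitoring API.

drive_metrics = [
    {"drive": "node-2/bay-5", "reallocated_sectors": 0,  "pending_sectors": 0},
    {"drive": "node-3/bay-1", "reallocated_sectors": 58, "pending_sectors": 12},
]

def needs_replacement(metrics: dict) -> bool:
    # Reallocated and pending sectors are common early-failure indicators.
    return metrics["reallocated_sectors"] > 10 or metrics["pending_sectors"] > 0

for m in drive_metrics:
    if needs_replacement(m):
        print(f"ALERT: proactively replace {m['drive']} "
              f"(reallocated={m['reallocated_sectors']}, "
              f"pending={m['pending_sectors']})")
```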

It would also be useful if all the data collected from a call-home system could be aggregated and compared against like systems at other customer sites in the field. This data could give storage administrators greater insight into the relative reliability and stability of the disk drives they have deployed compared with disk devices in other production environments. For example, if a particular manufacturer’s disk device has a higher real-world mean time between failures (MTBF) than a competing product, a storage planner may decide to adopt that manufacturer’s product with the next storage upgrade, potentially bolstering system integrity and overall availability.
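
A fleet-wide comparison of that kind might boil down to something like the following, using made-up call-home records; observed MTBF here is simply total drive-hours divided by failures, a simplification of how a reliability engineer would actually model it.

```python
# Sketch of a fleet-wide drive comparison built from made-up call-home records.
# Observed MTBF here is total drive-hours divided by failures, a simplification.

fleet = [
    {"model": "VendorA-8TB", "drive_hours": 9_200_000, "failures": 11},
    {"model": "VendorB-8TB", "drive_hours": 8_700_000, "failures": 4},
]

for record in fleet:
    mtbf_hours = record["drive_hours"] / max(record["failures"], 1)
    afr_pct = record["failures"] / (record["drive_hours"] / 8760) * 100
    print(f"{record['model']}: observed MTBF ~{mtbf_hours:,.0f} hours, "
          f"annualized failure rate ~{afr_pct:.1f}%")
```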

Conclusion

IT organizations can no longer act as infrastructure caretakers. They need to help their businesses leverage technology to capitalize on market opportunities to remain competitive. But if the bulk of their time is consumed with the daily care and feeding of core infrastructure, over time they may become less relevant to the business. One area of opportunity to simplify infrastructure management and to reduce ongoing costs is data storage.

Scale-out NAS storage systems, like those from Exablox, can deliver scalable storage performance that requires little to no storage management. Deployed as discrete storage nodes, these scale-out systems present a NAS front-end interface to applications while leveraging highly efficient and highly resilient object storage to protect business data ‘under the covers’. Designed to be plug-and-play, these solutions can allow limited IT staffs to regain time for business enablement activities rather than spending it on data management housekeeping. Furthermore, by giving storage planners a more granular way to deploy storage just-in-time, these systems can help IT reduce up-front capital spending and significantly drive down storage costs over time.

System upkeep and ongoing maintenance are also critical elements that can be overlooked when factoring in the total cost of ownership of a solution. Technologies that incorporate cloud-based monitoring tools, which provide real-time status on system health and can identify and predict drive failures, enable storage managers to proactively resolve hardware issues and take corrective action without waiting for an outside engineer to arrive onsite. This can help ensure uptime, reduce contracted maintenance costs and keep IT in control.

Sponsored by Exablox

As a 22-year IT veteran, Colm has worked in a variety of capacities ranging from technical support of critical OLTP environments to consultative sales and marketing for system integrators and manufacturers. His focus in the enterprise storage, backup and disaster recovery solutions space extends from mainframe and distributed computing environments across a wide range of industries.
