The boom in data growth and usage makes maximizing the utilization of storage resources more important than ever. The silos of storage infrastructure that have long existed in enterprise data centers are becoming further fragmented as storage managers deploy dedicated infrastructure to host specific workloads in order to optimize capacity and performance. This fragmentation creates a management nightmare for storage managers, especially given the variety of infrastructure resources that must be managed (traditional three-tier infrastructure stacks, hyperconverged solutions, public cloud services, etc.) and the new levels of agility that IT is pressured to deliver to the business.
To streamline storage management, some storage array vendors advocate consolidating onto a single system. However, the cost and time required to purchase, deploy, and migrate to a single system robust enough to meet the needs of all enterprise workloads is substantial, and in fact prohibitive for many enterprises. Furthermore, this approach may introduce a “noisy neighbor” problem, whereby one application or virtual machine monopolizes available resources and degrades other workloads’ service levels. Finally, today’s application ecosystem is dynamic, with workloads being spun up and down frequently; over time, this is likely to necessitate the deployment of “one-off” systems, reintroducing fragmentation.
The challenges inherent in “big box” storage consolidation have given rise to software-defined storage (SDS), which overlays a common, virtualized storage management framework across physically separate storage systems. The SDS approach carries its own pain points: it replaces the built-in management tools that the customer has typically already paid for, and that are usually well optimized for the specific hardware they operate. Additionally, SDS platforms may become a performance bottleneck because they typically run on a server or virtual machine rather than on dedicated storage infrastructure. Modern performance-hungry workloads, such as high-velocity analytics and artificial intelligence (AI), can ill afford this overhead.
To streamline management, maximize the utilization and longevity of existing resources, and retain infrastructure-level flexibility, storage managers should consider deploying a monitoring dashboard. Storage monitoring tools balance the flexibility to obtain deep insight into the performance and other issues of specific devices or storage pools with an aggregated view of these metrics, as well as of alerts such as error notifications. As a result, they enable storage managers to be more proactive, both in identifying issues before they impact production workloads and in planning for capacity and performance needs.
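To illustrate the idea, the sketch below shows how per-device metrics can feed both an aggregated utilization view and per-device threshold alerts. The device names, metric values, and the 80% alert threshold are all hypothetical; a real monitoring tool would collect these figures through vendor APIs or protocols such as SNMP rather than from hard-coded data.

```python
# Minimal sketch: aggregate per-device storage metrics and raise
# capacity alerts. All names and numbers here are hypothetical.

CAPACITY_ALERT_PCT = 80.0  # alert when a device exceeds this utilization

# Sample per-device metrics as a monitoring agent might report them.
devices = [
    {"name": "array-01", "capacity_gb": 10240, "used_gb": 9100, "latency_ms": 2.1},
    {"name": "array-02", "capacity_gb": 20480, "used_gb": 8200, "latency_ms": 0.9},
    {"name": "hci-01",   "capacity_gb": 5120,  "used_gb": 4700, "latency_ms": 4.3},
]

def utilization_pct(dev):
    """Percentage of a device's capacity currently in use."""
    return 100.0 * dev["used_gb"] / dev["capacity_gb"]

# Aggregated view across all monitored devices.
total_capacity = sum(d["capacity_gb"] for d in devices)
total_used = sum(d["used_gb"] for d in devices)
overall_pct = 100.0 * total_used / total_capacity

# Per-device alerts: flag devices before they run out of capacity.
alerts = [d["name"] for d in devices if utilization_pct(d) > CAPACITY_ALERT_PCT]

print(f"Overall utilization: {overall_pct:.1f}%")
print(f"Devices over {CAPACITY_ALERT_PCT:.0f}%: {alerts}")
```

Note how the aggregate view (about 61% utilized overall) can mask individual devices that are nearly full; surfacing both levels at once is what lets a storage manager act before a workload is impacted.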
For further discussion of how to use storage monitoring to get the most out of your storage infrastructure, watch this on-demand webinar with Storage Switzerland and SolarWinds.