When most IT professionals hear the phrase “storage is growing,” they typically roll their eyes and assume the vendor is talking about the unprecedented growth we’ve seen in data capacity requirements over the last decade or so. But storage is actually growing along a number of different vectors: performance, protocols, hardware types, and locations. While there are solutions to help manage this growth, they are typically focused on only one aspect of it. Instead, IT needs a more holistic approach, a data fabric, that manages these different vectors in concert with each other.
A data fabric “sews” together data management, data placement, performance optimization, and access management, enabling storage resources to be provisioned automatically to requesting users or applications in a self-service manner. This means data can move between storage systems within a data center and/or to the cloud without changing user processes. It also means data can carry a specific set of quality-of-service (QoS) guarantees so that it responds to user requests in a consistent fashion. Finally, it means users and devices can write or read data from the fabric with the access protocol of their choice.
Most storage systems today focus on a single location, typically on-premises, and support only a single access method, typically file or block. Those that support the cloud can usually do so only in a unidirectional fashion, leveraging the cloud solely as a giant data dumping ground. A data fabric is a solution that is as adroit running in the cloud as it is running on-premises. More importantly, it can seamlessly move data between locations (in both directions), not only for backup and archive but also for cloud bursting or application migration.
Users and application owners are demanding a self-service experience in which they do not have to wait for IT to respond to their requests, and IT needs to be prepared for this demand. Keep users waiting too long and they will seek alternatives like the public cloud. The demand for self-service means the data fabric also needs policy-driven automation: users request storage resources to their specifications, and the fabric provides them, configuring itself in the background. Complex, legacy storage solutions fail to offer the required flexibility and agility.
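To make the policy-driven model concrete, the following is a minimal sketch of how a fabric might resolve a self-service request against a named policy. All class names, field names, and policy names here are illustrative assumptions, not a real fabric API:

```python
# Hypothetical sketch of policy-driven, self-service storage provisioning.
# Every name below (StoragePolicy, ProvisionRequest, "db-gold", etc.) is an
# assumption for illustration, not the API of any real product.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """A policy template the fabric matches user requests against."""
    name: str
    protocol: str        # access protocol, e.g. "nfs", "smb", "iscsi"
    min_iops: int        # QoS floor the fabric must guarantee
    tier: str            # placement hint, e.g. "flash", "hybrid", "cloud"

@dataclass
class ProvisionRequest:
    """What a user or application asks for; no storage details required."""
    requester: str
    capacity_gb: int
    policy_name: str

def provision(request, policies):
    """Resolve a request against a named policy. A real fabric would then
    pick whichever backend (on-premises or cloud) satisfies the policy."""
    policy = policies.get(request.policy_name)
    if policy is None:
        raise ValueError(f"unknown policy: {request.policy_name}")
    return {
        "owner": request.requester,
        "capacity_gb": request.capacity_gb,
        "protocol": policy.protocol,
        "guaranteed_iops": policy.min_iops,
        "tier": policy.tier,
    }

policies = {
    "db-gold": StoragePolicy("db-gold", protocol="iscsi", min_iops=20000, tier="flash"),
    "archive": StoragePolicy("archive", protocol="nfs", min_iops=500, tier="cloud"),
}

# The user only names a policy; the fabric fills in protocol, QoS, and tier.
volume = provision(ProvisionRequest("app-team", 500, "db-gold"), policies)
print(volume["protocol"], volume["guaranteed_iops"])
```

The key design point the sketch illustrates is the separation of concerns: users specify intent (capacity plus a policy), while protocol, QoS, and placement decisions stay inside the fabric, which is what lets data move between backends without changing user processes.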
The goal of the data fabric is to eliminate the storage sprawl that organizations are inflicting on themselves in an attempt to address data sprawl. Data sprawl is the result of the increasing number of use cases for IT, ranging from multiple types of databases (relational and NoSQL), to unstructured data types (legacy NAS, high-performance NAS, and data analytics), as well as the core demands of virtualization (server, desktop, and containers). Unfortunately, because of the lack of options, most organizations feel forced to buy a unique storage system for each of these use cases. As a result, there has been unprecedented storage sprawl within data centers over the last few years.
A data fabric, designed to deliver the required performance and scalability while leveraging the cloud, can meet a wide range of these use cases and eliminate storage sprawl.
Organizations are drowning under the weight of data, and the typical response to that deluge is to buy a specific storage solution for each use case, which only adds to overall data center cost and increases management complexity. A data fabric provides a single, automated solution that enables IT to deliver storage in a self-service fashion. The result should be a less expensive storage infrastructure that better meets the needs of the organization.
Sponsored by Elastifile