For years, organizations have struggled to keep up with data growth that continues to accelerate. In addition to the never-ending demand for faster application response times, most data now needs to be retained indefinitely, driven by the rise of big data and analytics as well as regulatory requirements. Consequently, organizations are frequently forced to overprovision new storage purchases in both performance and capacity. This overprovisioning leads to a complex collection of disparate storage silos, increasing the true cost of the storage investment.
To “solve” the problem, storage vendors have introduced a variety of NAS, SAN and object storage solutions built on proprietary hardware and software. These solutions, however, typically communicate only with other storage systems made by the same vendor. Because no single vendor can solve all of a data center’s storage problems, data centers are left with multiple solutions from multiple vendors and no effective way to move data between systems. The result is vendor lock-in.
Organizations are looking to “software-defined” solutions to release them from the chains of vendor lock-in and give them the flexibility to use the most appropriate storage system for each use case. When evaluating software-defined solutions, a primary requirement should be true data mobility: any data can be moved transparently and automatically as needed, across multiple tiers of storage, regardless of protocols, device types, or where the storage tiers are located within the enterprise.
Unlike storage virtualization alone, data virtualization provides a more granular approach to moving data. It does this by separating metadata from the actual data while using standard protocols to virtualize disparate storage tiers across a global dataspace. This provides a logical abstraction of all physical storage within that single global dataspace, regardless of device type or protocol. The result is that IT planners can purchase the storage hardware best suited to each use case without increasing complexity.
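The metadata/data separation described above can be sketched in a few lines of Python. This is an illustrative model, not any vendor's implementation: a catalog maps each logical path in the global dataspace to a physical location on some tier, and migrating data between tiers updates only that mapping, so the logical path applications use never changes. The tier names and path formats here are invented for the example.

```python
class DataSpace:
    """Toy model of a single global namespace over disparate storage tiers."""

    def __init__(self):
        # Metadata: logical path -> (tier name, physical location).
        self._catalog = {}
        # Simulated physical tiers: tier name -> {location: bytes}.
        self._tiers = {"nvme": {}, "nas": {}, "object": {}}

    def write(self, logical_path, data, tier="nvme"):
        # Store the bytes on a tier, then record the mapping as metadata.
        location = f"{tier}/{len(self._tiers[tier])}"
        self._tiers[tier][location] = data
        self._catalog[logical_path] = (tier, location)

    def read(self, logical_path):
        # Applications only ever see the logical path; the catalog
        # resolves it to wherever the bytes currently live.
        tier, location = self._catalog[logical_path]
        return self._tiers[tier][location]

    def migrate(self, logical_path, new_tier):
        # Move the bytes, then update only the metadata entry.
        old_tier, old_location = self._catalog[logical_path]
        data = self._tiers[old_tier].pop(old_location)
        new_location = f"{new_tier}/{len(self._tiers[new_tier])}"
        self._tiers[new_tier][new_location] = data
        self._catalog[logical_path] = (new_tier, new_location)


ds = DataSpace()
ds.write("/projects/report.dat", b"hot data", tier="nvme")
ds.migrate("/projects/report.dat", "object")   # demote cooled-off data
# The application still reads the same logical path after migration.
assert ds.read("/projects/report.dat") == b"hot data"
```

Because the application holds only the logical path, the data can move from flash to NAS to object storage without reconfiguring anything on the application side.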
The advantage of an out-of-band solution is that it has little to no impact on performance. It also simplifies management with global, intelligent data mobility, greatly reduces overprovisioning, and increases agility through truly scalable performance and capacity, all while avoiding vendor lock-in by supporting commodity hardware from any vendor. Another advantage of data virtualization is that applications can, for the first time, access data on more than one tier of storage simply by being configured to point to a logical target rather than a physical one.
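The reason an out-of-band design has little performance impact can be shown with a minimal sketch (again hypothetical, with invented names): the metadata service is consulted once to resolve a logical target, and the bytes then move directly between the application and the storage device, so the control plane never sits in the data path.

```python
class MetadataService:
    """Control plane: resolves logical targets, never touches the data."""

    def __init__(self, catalog):
        self.catalog = catalog   # logical path -> (physical id, device)
        self.lookups = 0         # track how often the control plane is hit

    def resolve(self, logical_path):
        self.lookups += 1
        return self.catalog[logical_path]


class StorageTier:
    """Data plane: serves bytes directly to the application."""

    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, physical_id):
        return self.blocks[physical_id]


tier = StorageTier({"blk-7": b"payload"})
mds = MetadataService({"/logs/app.log": ("blk-7", tier)})

# The application is configured with the logical target only.
physical_id, device = mds.resolve("/logs/app.log")  # control path, once
data = device.read(physical_id)                     # data path, direct
assert data == b"payload" and mds.lookups == 1
```

Since every subsequent read or write goes straight to the device, the metadata service adds no per-I/O latency, which is the performance advantage the out-of-band approach claims.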
IT departments are under incredible pressure. They must meet demands for ever-greater storage performance while storing all data indefinitely and still keeping costs down. Yet to maintain a manageable environment, IT professionals often try, unsuccessfully, to settle on one vendor for most of their storage needs, willingly locking themselves into that vendor. Data virtualization allows IT innovators to break the single-vendor cycle, delivering far more choice and flexibility in the enterprise while greatly simplifying the storage environment.
Sponsored by Primary Data