Today’s applications have varying storage needs across performance, protection, and long-term retention. In a perfect world, you would store each dataset on the storage system that best meets its needs. The real world, however, introduces plenty of complications.
Consider an OLTP database. Its performance and protection needs are both high, while its long-term retention needs are typically low. What if the storage that best meets its performance needs is a cutting-edge flash storage system that lacks built-in data protection features, such as snapshots and replication? Would you sacrifice performance in order to guarantee proper protection? Probably not. The typical response would be to put the database on the best-performing storage you can, and then use some kind of “bolt-on” approach for data protection.
What if the database in the previous example also has long-term retention needs? The data would probably be exported out of the database and stored in some other format, such as XML. The long-term storage need of this data is high, but its performance requirement is relatively low. And that data will need protection, of course. But its value is such that you would want it on some type of storage system that protects itself (e.g., replication with version control to multiple destinations), rather than taking up space in your expensive backup system.
These two examples are relatively straightforward. What about the server virtualization platform? Each VM might have different performance and protection needs, which means each VM might need a different kind of storage. Some VMs may need very high-performance storage and have very high protection needs as well. Other VMs, such as those used in performance testing, have high performance needs but probably don’t need high-end data protection – if they need backups at all. There may also be VMs that have very high data protection needs but require relatively low performance. In addition, the needs of each VM can change over time, leaving some VMs wasting resources while others starve for them. There may be VMs that are not getting proper protection, or others that don’t have enough performance to get the job done. Identifying the mismatched VMs is tough, and doing anything about it is even tougher.
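To make the mismatch concrete, here is a minimal sketch of the kind of check involved. The VM names, the 1–5 scales, and the “wasting” threshold are illustrative assumptions, not any real product’s API; the point is simply that a mismatch can run in either direction (starving or over-provisioned).

```python
# Hypothetical sketch: flagging VMs whose current storage tier no longer
# matches their needs. Scales and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    perf_need: int        # 1 (low) .. 5 (high)
    protection_need: int  # 1 (low) .. 5 (high)
    tier_perf: int        # what the current storage tier delivers
    tier_protection: int

def find_mismatched(vms):
    """Return names of VMs that are starving (need exceeds what the tier
    delivers) or wasting resources (tier exceeds need by 2+ levels)."""
    mismatched = []
    for vm in vms:
        starving = (vm.perf_need > vm.tier_perf or
                    vm.protection_need > vm.tier_protection)
        wasting = (vm.tier_perf - vm.perf_need >= 2 or
                   vm.tier_protection - vm.protection_need >= 2)
        if starving or wasting:
            mismatched.append(vm.name)
    return mismatched

vms = [
    VMProfile("oltp-db", 5, 5, 5, 3),    # protection shortfall: starving
    VMProfile("perf-test", 5, 1, 5, 5),  # over-protected: wasting
    VMProfile("archive", 1, 5, 1, 5),    # well matched
]
print(find_mismatched(vms))  # -> ['oltp-db', 'perf-test']
```

Even this toy version shows why the problem is hard in practice: the “need” columns are rarely written down anywhere, and they drift over time.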
A better solution would be a system that could dynamically allocate storage resources based on the data needs of each application. This is typically done with a storage virtualization product that routes all storage requests through its controllers. While such systems can accomplish the goal, they can also increase storage latency. A newer approach handles the initial placement request out of band, while the actual data transfer remains in-band. This gives the benefit of dynamic data movement without adding latency to every I/O.
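The out-of-band idea can be sketched as follows. The class names and tier logic here are illustrative assumptions, not a description of any specific product; what matters is the separation of paths, with the controller consulted once per placement decision while reads and writes go directly to the chosen backend.

```python
# Minimal sketch of an out-of-band control path: the controller decides
# *where* data should live, but is not in the per-I/O data path.
# All names and the placement rule are illustrative assumptions.

class Backend:
    """In-band data path: the client talks to this directly."""
    def __init__(self):
        self.store = {}
    def write(self, key, data):
        self.store[key] = data
    def read(self, key):
        return self.store[key]

class PlacementController:
    """Out-of-band: consulted once per placement, not per I/O."""
    def place(self, perf_need):
        # Toy rule: high-performance workloads go to flash.
        return "flash" if perf_need >= 4 else "capacity"

# Usage: one out-of-band call, then direct I/O with no controller hop.
tiers = {"flash": Backend(), "capacity": Backend()}
ctl = PlacementController()

tier = ctl.place(perf_need=5)           # control path (out of band)
tiers[tier].write("row42", b"payload")  # data path (in band)
assert tiers[tier].read("row42") == b"payload"
```

The design choice is the key point: because the controller is only on the placement path, it can move data between tiers over time without sitting between the application and its storage on every request.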
Mismatched storage resources can be the bane of IT’s existence. They waste money and – just as importantly – they hurt IT’s ability to do its job: making sure that each application has what it needs. The idea of dynamically allocating storage resources based on the needs of each application promises to solve both problems, and it is worthy of examination.