Software defined storage (SDS) promises to reduce storage capital and operational costs by abstracting data services from the storage hardware. To deliver on these promises, SDS typically enables the use of commodity storage, which should lower storage acquisition costs. It also provides a common interface to data services regardless of the storage hardware used, lowering operational costs. While this may sound like a dream come true, data centers continue to purchase proprietary, turnkey, networked storage hardware. To reverse the trend, SDS solutions need to address three key problems that are keeping the data center from adopting them more broadly.
SDS Problem #1 – Leveraging Current Infrastructure
The data center is a busy place, with IT personnel simultaneously pulled in many different directions. In addition, there are embedded infrastructures and processes in place that are a trusted part of the IT professional’s day. Many SDS solutions expect the data center to “jump in with both feet” and replace its current storage hardware, infrastructure and processes with the SDS solution’s own. They often expect the data center to abandon shared storage hardware in favor of a commoditized hyper-converged architecture.
Many SDS solutions proclaim that they eliminate the need for the storage area network (SAN). These “no-SAN” solutions may be of interest to a greenfield data center, or even a new project within a data center, but not to a data center with a heavy investment in storage infrastructure. Considering that most data centers have a SAN of some sort, the lack of SAN support actually becomes a roadblock to SDS adoption instead of an advantage.
To overcome this problem, SDS solutions need to enable a “crawl, walk, run” implementation strategy by leveraging current infrastructure, while adding unified data services, until more capacity is needed. Then the data center could augment the current infrastructure with commodity storage if it so chooses. SDS often asks the exact opposite by requiring that the data center switch to a hyper-converged architecture where storage is no longer on a dedicated storage system.
SDS Problem #2 – Incomplete Data Services
While most data centers do have a shared storage infrastructure, they also have multiple storage systems on that infrastructure. These systems are often bought for specific environments like virtual desktop infrastructure or database applications, alongside a more general-purpose storage system for the bulk of the data center’s needs. If these systems and their infrastructure could be leveraged instead of replaced, there would be incredible value in abstracting the storage intelligence and applying the same data services across all the storage systems in the environment. It would reduce training time and allow for better overall use of storage resources. But for true unification of services to be viable, the data services provided by the SDS solution need to be at least as robust as what the storage systems themselves offer. The data center is not going to take a step backwards and lose features in order to have a unified data services interface.
The problem is that many SDS solutions lack completeness in their data services offering. A simple example: many SDS solutions lack the ability to replicate data to a remote location so that a disaster recovery copy can be created. Even rarer is the ability to migrate data into the SDS architecture itself. Data cannot simply be copied over, so some form of data migration facility from the old storage platform to the new one should be provided. Otherwise the data will have to be restored from a backup copy.
The two features that all of these SDS solutions claim to provide are the ability to leverage commodity storage to lower acquisition costs and a single operational console for data services implementation. The lack of a complete and robust data services offering means that point storage systems will still be purchased to fill the gaps the new SDS solution can’t.
SDS Problem #3 – Lack of Cloud Economics
Finally, many SDS solutions claim to provide the data center with a cloud-like hyper-converged architecture, but few provide a cloud-like economic model. These solutions are often software only, and most are licensed by capacity packs. Many also charge an additional fee for advanced features.
This can be a problem because most data centers are never at the right capacity point to take advantage of pack pricing. For example, a starter pack may cover 1–10 TB of storage, and the next pack may cover 10–25 TB. What if the organization has 11 TB of storage capacity? It would need to upgrade all the way to the 25 TB license for just 1 more TB of coverage. Instead, SDS solutions should price on a very granular per-TB subscription model that is verified once per year, with all features included in the subscription price.
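The gap between the two pricing models can be sketched in a few lines of code. The pack tiers, pack prices and the per-TB rate below are purely illustrative assumptions, not figures from any vendor:

```python
# Illustrative comparison of capacity-pack licensing vs. a granular
# per-TB subscription. All tiers and prices are hypothetical examples.

PACKS = [(10, 5000), (25, 11000), (50, 20000)]  # (max TB covered, pack price)
PER_TB_RATE = 500                                # hypothetical $/TB/year rate

def pack_price(capacity_tb):
    """Return the price of the smallest pack that covers the capacity.

    Pack licensing forces the buyer up to the next tier boundary,
    paying for capacity that may never be used.
    """
    for max_tb, price in PACKS:
        if capacity_tb <= max_tb:
            return price
    raise ValueError("capacity exceeds the largest available pack")

def subscription_price(capacity_tb):
    """Granular model: pay only for the TBs actually managed."""
    return capacity_tb * PER_TB_RATE

# The 11 TB example from the text: pack pricing forces the 25 TB tier.
print(pack_price(11))          # pays for 14 TB that sit unused
print(subscription_price(11))  # pays only for the 11 TB in service
```

Under these made-up numbers, the 11 TB site pays $11,000 for a 25 TB pack but only $5,500 under the granular model, which is the mismatch the article describes.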
Another cost that goes undiscussed by most SDS solutions is the cost of their inability to support existing infrastructure and storage systems, as described above. Many of these SDS solutions allow commodity drives to be installed in the same servers that provide the compute (hyper-converged). While this reduces future costs, it requires purchasing flash solid state drives and hard disk drives for the existing server architecture.
If the SDS solution can leverage existing storage infrastructure and storage systems, the data center can gain the operational efficiencies of abstracted storage intelligence without purchasing any additional storage hardware. Combined with a licensing model that makes upfront costs more cloud-like, the economic roadblock to adoption is almost completely removed.
To increase adoption rate, SDS solutions actually need to be less aggressive in their approach to the data center. Instead of forcing a move to commodity storage, they should improve the usefulness of the existing storage infrastructure and storage hardware. They also should provide a full array of data services that more than match what is available from the typical storage system. This would allow the SDS solution to be integrated gradually, initially for migration, and then for data protection and disaster recovery. After the solution has proven itself, IT would feel confident using it to provide common data services for all of their storage systems.
Finally, the economics have to make sense. The solution should be able to leverage existing storage and storage infrastructure as well as pave the way to commodity storage. It should also be licensed in a modern, cloud-like way: priced per TB, for example, with all features enabled. The result would be a less disruptive adoption of SDS that appears more gradual but is actually faster than the current SDS adoption rate.
This article sponsored by: FalconStor
FreeStor™, the new SDS platform from FalconStor®, is an example of a solution designed to deliver on these capabilities. As we discuss in our recent briefing note, FreeStor is an SDS solution based on code with over 15 years of real-world, proven use. Its feature set exceeds the capabilities of most storage systems, and it provides unique features like data migration, tape out, cross-system deduplication and any-to-any system replication.
It is a truly horizontal approach that delivers unified data services seamlessly across today’s mixed enterprise environments. FreeStor’s new pricing model ensures customers pay only for what they use, in a subscription-style approach. It is priced at a fixed $/TB for the capacity being managed by FreeStor. All data services, capabilities, software upgrades, feature enhancements and 24 x 7 support are included on an annual basis. At the end of each year, the customer does a “true-up” on the capacity being managed by FreeStor using a built-in utility. If the total capacity is the same as the previous year, the annual price does not change. If the capacity being managed goes up or down, the difference is calculated at the pre-established $/TB rate and becomes the new cost for the following year. It is a simple, predictable, no-surprises approach that is right for SDS adoption.
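The true-up described above amounts to a simple annual recalculation. The sketch below is one plausible reading of that mechanism, with an entirely hypothetical $/TB rate and capacities; the actual FalconStor utility and contract terms may differ:

```python
# Hypothetical sketch of an annual capacity "true-up": re-measure the
# capacity under management and recompute next year's cost at the
# pre-established $/TB rate. Rate and capacities are illustrative only.

def true_up(previous_tb, measured_tb, rate_per_tb):
    """Return (next year's annual cost, capacity change in TB).

    If capacity is unchanged, the cost is unchanged; if it grew or
    shrank, the new cost reflects the measured capacity at the same rate.
    """
    delta_tb = measured_tb - previous_tb  # negative if capacity shrank
    new_annual_cost = measured_tb * rate_per_tb
    return new_annual_cost, delta_tb

# Capacity grew from 100 TB to 120 TB at a hypothetical $400/TB rate:
cost, delta = true_up(previous_tb=100, measured_tb=120, rate_per_tb=400)
print(cost, delta)

# Capacity shrank: the annual cost goes down as well.
cost, delta = true_up(previous_tb=120, measured_tb=110, rate_per_tb=400)
print(cost, delta)
```

The key property the article highlights is predictability: the $/TB rate is fixed up front, so next year's bill follows mechanically from the measured capacity, in either direction.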