The Three Problems with Software Defined Storage

Software defined storage (SDS) promises to reduce storage capital and operational costs by abstracting data services from the storage hardware. To deliver on these promises, SDS typically enables the use of commodity storage, which should lower storage acquisition costs. It also provides a common interface to data services regardless of the storage hardware used, lowering operational costs. While this may sound like a dream come true, data centers continue to purchase proprietary, turnkey, networked storage hardware. To reverse the trend, SDS solutions should address three key problems that are keeping data centers from adopting them more broadly.

SDS Problem #1 – Leveraging Current Infrastructure

The data center is a busy place, with IT personnel simultaneously pulled in many different directions. In addition, there are embedded infrastructures and processes in place that are a trusted part of the IT professional's day. Many SDS solutions expect the data center to "jump in with both feet" and replace its current storage hardware, infrastructure and processes with the SDS solution's own. They often expect the data center to abandon shared storage hardware in favor of a commoditized hyper-converged architecture.

Many SDS solutions proclaim that they eliminate the need for the storage area network (SAN). These "no-SAN" solutions may be of interest to a greenfield data center, or even to a new project within a data center, but not to a data center with a heavy investment in storage infrastructure. Considering that most data centers have a SAN of some sort, the lack of SAN support actually becomes a roadblock to SDS adoption instead of an advantage.

To overcome this problem, SDS solutions need to enable a "crawl, walk, run" implementation strategy: leverage the current infrastructure while adding unified data services, until more capacity is needed. Then the data center could augment the current infrastructure with commodity storage if it so chooses. SDS often asks for the exact opposite, requiring that the data center switch to a hyper-converged architecture where storage no longer resides on a dedicated storage system.

SDS Problem #2 – Incomplete Data Services

While most data centers do have a shared storage infrastructure, they also have multiple storage systems on that infrastructure. These systems are often bought for specific environments, like virtual desktop infrastructure or database applications, alongside a more general-purpose storage system for the bulk of the data center's needs. If these systems and their infrastructure can be leveraged instead of replaced, there is incredible value in abstracting the storage intelligence and applying the same data services across all the storage systems in the environment. Doing so reduces training time and allows for better overall use of storage resources. But for true unification of services to be viable, the data services provided by the SDS solution would need to be at least as robust as what the storage systems themselves offer. The data center is not going to want to take a step backwards and lose features in order to have a unified data services interface.

The problem is that many SDS solutions lack completeness in their data services offering. A simple example: many SDS solutions lack the ability to replicate data to a remote location so that a disaster recovery copy can be created. Even rarer is the ability to migrate data into the SDS architecture itself. Data cannot simply be copied over, so some form of data migration facility from the old storage platform to the new one should be provided. Otherwise, the data will have to be restored from a backup copy.

The two features that all of these SDS solutions do claim to provide are the ability to leverage commodity storage to lower acquisition costs and a single operational console for implementing data services. The lack of a complete and robust data services offering means that point storage systems will still be purchased to fill the gaps that the new SDS solution can't.

SDS Problem #3 – Lack of Cloud Economics

Finally, many SDS solutions claim to provide the data center with a cloud-like hyper-converged architecture, but few provide a cloud-like economic model. These solutions are often software-only, and most are licensed by capacity packs. Many will also charge an additional fee for advanced features.

This can be a problem because most data centers are never at the right capacity point to take advantage of pack pricing. For example, a starter pack may cover 1-10TB, and the next pack may cover 10-25TB. What if the organization has 11TB of storage capacity? It would need to upgrade all the way to the 25TB license for one more TB of support. Instead, SDS solutions should price on a very granular per-TB subscription model, verified once per year, with all features included in the subscription price. The sketch below puts rough numbers to the difference.
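To make the pack-pricing penalty concrete, here is a minimal sketch in Python. The pack tiers, dollar figures and per-TB rate are hypothetical, invented only to illustrate the 11TB example above; no vendor's actual price list is implied.

```python
# Hypothetical capacity packs: (largest TB covered, annual price).
# These tiers and prices are illustrative assumptions, not real quotes.
PACKS = [(10, 5_000), (25, 11_000), (50, 20_000)]

PER_TB_RATE = 500  # hypothetical granular subscription: $/TB per year


def pack_price(capacity_tb: int) -> int:
    """Price of the smallest pack that covers the managed capacity."""
    for max_tb, price in PACKS:
        if capacity_tb <= max_tb:
            return price
    raise ValueError(f"no pack covers {capacity_tb}TB")


def per_tb_price(capacity_tb: int) -> int:
    """Price under a granular per-TB subscription."""
    return capacity_tb * PER_TB_RATE


# The article's 11TB case: one TB past the 10TB pack forces the
# customer to pay for the 25TB tier.
print(pack_price(11))    # 11000 -- billed as if 25TB were needed
print(per_tb_price(11))  # 5500  -- billed for exactly what is managed
```

Under these made-up numbers, crossing the 10TB boundary doubles the annual cost under pack pricing, while the per-TB model scales by a single terabyte.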

Another cost that most SDS vendors leave undiscussed is the cost of being unable to support existing infrastructure and storage systems, as described above. Many of these SDS solutions allow commodity drives to be installed in the same servers that provide the compute (hyper-converged). While this may reduce future costs, it requires the purchase of solid state drives and hard disk drives for the existing server architecture.

If the SDS solution can leverage existing storage infrastructure and storage systems, the data center gains the operational efficiencies of abstracted storage intelligence without having to purchase any additional storage hardware. Combined with a licensing model that makes upfront costs more cloud-like, the economic roadblock to adoption is almost completely removed.

Conclusion

To increase their adoption rate, SDS solutions actually need to be less aggressive in their approach to the data center. Instead of forcing a move to commodity storage, they should improve the usefulness of the existing storage infrastructure and hardware. They should also provide a full array of data services that more than matches what is available from the typical storage system. This would allow the SDS solution to be integrated gradually: initially for migration, then for data protection and disaster recovery. After the solution has proven itself, IT would feel confident using it to provide common data services for all of its storage systems.

Finally, the economics have to make sense. The solution should be able to leverage existing storage and storage infrastructure as well as pave the way to commodity storage. It should also be licensed in a modern, cloud-like way; for example, priced per TB with all features enabled. The result would be a less disruptive adoption of SDS that would appear more gradual but would actually be faster than the current SDS adoption rate.

This article sponsored by: FalconStor

FreeStor™, the new SDS platform from FalconStor®, is an example of a solution designed to deliver on these capabilities. As we discuss in our recent briefing note, FreeStor is an SDS solution based on code with over 15 years of real-world, proven use. Its feature set exceeds the capabilities of most storage systems, and it provides unique features like data migration, tape-out, cross-system deduplication and any-to-any system replication.

It is a truly horizontal approach to delivering unified data services seamlessly across today's mixed enterprise environments. FreeStor's new pricing model ensures customers only pay for what they use, in more of a subscription model. It is priced at a fixed $/TB for the capacity being managed by FreeStor. All data services, capabilities, software upgrades, feature enhancements and 24x7 support are included on an annual basis. At the end of each year, the customer does a "true-up" on the capacity being managed by FreeStor using a built-in utility. If the total capacity is the same as the previous year, the annual price does not change. If the capacity being managed goes up or down, the difference is calculated at the pre-established $/TB rate and becomes the new cost for the following year. The sketch below illustrates the arithmetic. It is a simple, predictable, no-surprises approach that is right for SDS adoption.
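A minimal sketch of the true-up arithmetic as described above; the function name and the $/TB rate in the example are assumptions made for illustration, not FalconStor's actual utility or pricing:

```python
def annual_true_up(prior_annual_cost: float,
                   prior_capacity_tb: float,
                   measured_capacity_tb: float,
                   rate_per_tb: float) -> float:
    """Return next year's annual cost after the capacity true-up.

    If measured capacity matches last year's, the price is unchanged;
    otherwise the difference is priced at the pre-established $/TB
    rate and rolled into the following year's cost.
    """
    delta_tb = measured_capacity_tb - prior_capacity_tb
    if delta_tb == 0:
        return prior_annual_cost  # same capacity, same price
    return prior_annual_cost + delta_tb * rate_per_tb


# Hypothetical example: 100TB managed at $400/TB ($40,000/year),
# with 120TB measured at this year's true-up.
print(annual_true_up(40_000, 100, 120, 400))  # 48000.0 -- next year's cost
```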

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

5 comments on “The Three Problems with Software Defined Storage”
  1. Thanks George. Very informative.
    Dr. David Passmore of Gartner used to say “Virtualization is least effective the farther away it occurs from hardware”. So I think proprietary hardware storage virtualization solutions will be with us for a while longer…
    What is more decisive to me is the twenty years of history in the marketplace. Most of the software-defined storage solutions have been around more than a decade, including the IBM product, DataCore, etc. If they were effective in the marketplace, these products would have gained a lot more traction by now.

  2. Bruce says:

    I usually enjoy reading Swiss Storage articles. But I’ve never seen so much misinformation and FUD in one article before. I get that the objective is to position Falconstor as a solution which gets round these ‘SDS’ problems. That’s if they were problems in the first place. I agree that leveraging current infrastructure is a challenge, as in many cases a client didn’t build their current DC compute infrastructure to ‘maybe’ handle SDS in the future, especially if they had outsourced their storage to a SAN with dedicated resources. However, while many of the SDS players have in the past used this idea to accelerate perceived ROI of their solutions, it’s quite clear now that they have seen their SDS tech is largely used in implementations with new third party hardware and do market it as such. This doesn’t make it any less attractive, but it does mean that timescales for implementation have to be more thoughtfully considered. The economics of the software/hardware abstraction remain in place. On points 2 and 3, I guess you are looking in a vastly different part of the SDS market to where I’m standing. I have two very strong SDS solutions in my portfolio; their data services are VERY complete and they charge from 1TB, unlike traditional arrays which need to add $$$$ shelves at a time. One of my SDS partners can add 1TB useable (2 x 1TB HDDs mirrored, 2TB s/w licence and support) of enterprise class storage for less than $2,000! Find me a traditional array that can do that! George – you are very respected and know this market like the back of your hand – but even you must have cringed a bit while writing this up. Whatever pays the bills though right?

    • George Crump says:

      Bruce,

      Thanks for reading and commenting. Great comments like this often inspire me to write a follow-on blog, and yours made that happen. So thank you. But I don’t want to keep you waiting, so here are the highlights:

      On your first point it seems like we agree. Many, not all, SDS solutions require a new server-side architecture instead of leveraging the existing shared architecture. I was careful to say “many,” not all. The purpose of this article, and many like it on our site, was not to spread FUD but to make our readers aware of potential (not guaranteed) shortcomings and then make sure that the solutions they consider meet their specific needs.

      On the second point, I stand by my assessment that there is a wide variation in the feature sets available from SDS vendors. That said, a few are fairly complete. A common missing feature, though, is migration; without it you have to either replicate data at a block level, use some sort of separate migration tool, or recover from a backup copy.

      On the third point, as we move into a software-defined era we are trying to raise awareness, not spread FUD. We just want IT professionals to make sure they are considering SDS solutions that are as granular as possible in the way they charge for licensing.

      On your final statement, we make it very clear when a vendor has asked to sponsor an article. The final article is always something that we can stand behind and that we feel has value to the community. I stand behind the points raised and consider them a valid set of tests that an IT professional should apply when considering a software defined solution. It sounds like your solutions would also pass these tests, and that’s fine; we never said that there was only one SDS solution that does.

      If you disagree with our assessment, we also provide the vehicle for you to voice that disagreement, which you did. And we allowed it to be posted, but that does not mean that we were somehow swayed by the sponsor. It simply means that you and I disagree, which two professionals will do occasionally.

      Again, thanks for reading and commenting.

      George

  3. Bruce says:

    Hi George, I’ve re-read your article and very polite reply (makes me look a little aggressive in comparison – sorry, I blame the stress!) and I do see where you are coming from, but the original still feels a bit OTT with regard to your slightly pessimistic assessment of the SDS technological landscape. I appreciate the commercial angle: FalconStor releases a ‘new’ (can I call it that?) solution and you’ve been asked to position an angle on SDS which emphasises certain potential shortcomings which they solve. Maybe you should have called it ‘The Three Potential Problems with Software Defined Storage’; it just seems to cast a bit of a black cloud over SDS, when there are many really good solutions out there that really don’t fall foul of the points you mention. But enough of that. Thank you for your reply. I look forward to other articles on your site.

  4. Larry Freeman says:

    Good points raised, George. Regarding your 2nd point, this statement says it all: “But for true unification of services to be viable, the data services provided by the SDS solution would need to be at least as robust as what the storage systems themselves offer.”

    And therein lies the problem. There is no product that can extract sophisticated cross-vendor data services across array vendors like EMC, NetApp, HDS, IBM, HP, et al. At best the products from FalconStor or DataCore can perform higher-level management functions using a unified interface. Useful in some cases, but incorrectly called SDS, as you point out.

    It looks like the future of SDS will be one where vendors will offer variants of their OS residing on different platforms: traditional arrays, hypervisors, and inside public clouds. With a single OS, all of the data mgt services will be uniformly available through all points on an SDS data fabric. You’ll still be dependent on a single vendor, but that vendor would bring the SDS flexibility customers are looking for by spreading their OS across several variants.

    Larry
