The two key benefits of Software Defined Storage (SDS) are increased flexibility and improved storage economics. Ironically, for organizations that standardize on an SDS solution, both flexibility and cost savings are constrained by the storage hardware, which of course is still needed. The problem is that storage hardware either comes from legacy vendors that have already integrated their own storage services, or the IT professional is forced to build a storage system from scratch, which they typically have neither the time nor the desire to do. To extract maximum value from an SDS environment, the storage hardware must meet new requirements.
1 – Flexible Data Services
Most storage systems on the market today offer some form of data services, such as thin provisioning, snapshots and replication. These services suffice when the environment is small and can be served by a single storage hardware stack. While many storage hardware providers now include these services with their hardware, they are bundled into the cost and are not actually free; customers typically "pay" for them through overpriced hardware.
The problem is that these services are often replaced by the SDS solution; in fact, most SDS solutions offer more advanced services than the basics that many storage hardware platforms provide. There are two key issues with this overlap. First, there is an obvious cost disadvantage to the IT organization (it is paying for the same thing twice). Second, the included services may interfere with the SDS services, causing a performance impact or even potential data loss.
Storage hardware vendors need to be more flexible in their delivery of these services. For example, they should be able to turn their storage software features off so they do not interfere with the SDS solution. Ideally, they should be able to unbundle them altogether so that the hardware can be priced more competitively.
2 – Reliability
Beyond simplified management, one of the attractions of an SDS solution is reducing the cost of future hardware purchases. It is true that, thanks to SDS, virtually any storage hardware can be used and managed by the independent software. But the IT professional has to be careful about how much cost they drive out of the equation. Quality still matters, and an unreliable storage system can be expensive.
While data loss should be less of a concern thanks to SDS, the extra steps an SDS solution might take to ensure data reliability, such as replication, advanced RAID or erasure coding, all consume extra disk capacity. If, instead, the storage hardware is equipped with its own efficient RAID algorithm that delivers solid data protection and rapid rebuild of failed media while consuming less overall capacity, that should be factored into the system's total cost of ownership (TCO).
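The capacity overhead of these protection schemes is easy to quantify. The following sketch compares the usable fraction of raw capacity under common schemes; the drive counts, scheme parameters, and raw-capacity figure are illustrative assumptions, not numbers from any specific product:

```python
# Rough usable-capacity comparison for common data protection schemes.
# All parameters below are illustrative assumptions.

def usable_fraction(data_units, parity_units):
    """Fraction of raw capacity left for user data."""
    return data_units / (data_units + parity_units)

raw_tb = 100  # total raw capacity in TB (assumed)

schemes = {
    "3x replication":        usable_fraction(1, 2),    # three full copies
    "RAID-6 (8+2)":          usable_fraction(8, 2),
    "Erasure coding (10+4)": usable_fraction(10, 4),
}

for name, frac in schemes.items():
    print(f"{name}: {frac:.0%} usable -> {raw_tb * frac:.0f} TB of {raw_tb} TB")
```

The point of the exercise: software-level replication leaves only a third of raw capacity usable, while a hardware RAID scheme with a low parity ratio leaves most of it, which is exactly the TCO difference the paragraph above describes.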
The single biggest cost of unreliable hardware is time. Replacing hardware takes time, and hardware failures do not occur when they are convenient to fix; they often strike at the worst possible moment, when the storage administrator has five other tasks demanding attention. Stopping the current work, identifying which component has failed, carefully removing it without causing a greater failure, and then starting a recovery process such as a RAID rebuild all add up.
Although SDS frees the organization from buying premium-priced hardware, it still needs quality storage systems to ensure that the hardware savings are not erased by lost administrator productivity due to hardware failures.
3 – Performance
Hardware designed for SDS also needs to deliver performance from both the flash and disk storage tiers, and ideally that media should reside in the same system. Although many SDS solutions allow migration of data between hardware platforms, transferring that data across the storage network takes time. If the HDD and flash storage are in the same system, the transfer can be done internally.
Additionally, the storage hardware should be able to perform its own internal automated data movement, since many SDS solutions do not presently provide this. They will manage flash and HDD as two separate tiers and allow data migration between them, but not all of them will do this automatically based on access frequency.
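The access-frequency placement described above can be sketched in a few lines. This is a minimal illustration of the idea, assuming a hypothetical two-tier system; the extent names, counts, and flash capacity are invented for the example and do not describe any particular SDS product:

```python
# Minimal sketch of access-frequency-based tiering for a hypothetical
# two-tier (flash + HDD) system. All names and sizes are assumptions.

FLASH_CAPACITY = 4  # number of extents that fit on the flash tier (assumed)

def place_extents(access_counts):
    """Promote the most frequently accessed extents to flash;
    everything else stays on the HDD tier."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:FLASH_CAPACITY]), set(ranked[FLASH_CAPACITY:])

counts = {"e1": 900, "e2": 12, "e3": 450, "e4": 3, "e5": 700, "e6": 88}
flash, hdd = place_extents(counts)
print("flash tier:", sorted(flash))  # the hottest extents only
print("hdd tier:  ", sorted(hdd))
```

A real implementation would rerun this placement periodically and weigh recency as well as frequency, but the principle is the same: the hardware, not the administrator, decides what earns a place on flash.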
This automated data movement should be done via tiering rather than caching. Tiering makes more intelligent use of flash: flash is used only when there is an appropriate return on the investment in the flash tier, and data is stored uniquely on flash so it does not consume capacity twice. With caching, everything in the cache tier also resides on the hard disk tier, and given today's 1TB+ flash caches, this doubling of capacity adds up.
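The capacity argument is simple arithmetic. A back-of-the-envelope sketch, assuming an illustrative 20 TB dataset and the 1TB+ flash figure from the paragraph above:

```python
# Tiering vs. caching capacity consumption. With caching, every block
# on flash also occupies HDD capacity; with tiering, hot data lives
# only on flash. Dataset size is an assumption for illustration.

flash_tb = 1.0     # flash tier/cache size (the 1TB+ example above)
dataset_tb = 20.0  # total user data (assumed)

# Caching: full dataset on HDD plus a duplicate copy of hot data on flash.
caching_total = dataset_tb + flash_tb

# Tiering: hot data stored uniquely on flash, the rest on HDD.
tiering_total = dataset_tb

print(f"caching consumes {caching_total} TB, tiering {tiering_total} TB")
print(f"duplicate capacity avoided by tiering: {caching_total - tiering_total} TB")
```

One terabyte of duplication per system is modest, but across a fleet of hybrid arrays refreshed every few years, it compounds into real cost.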
Finally, the performance of the HDD tier should still be relatively responsive. Most hybrid storage systems no longer invest in extracting the full performance potential of the disk tier, simply counting on flash to carry the performance load. A properly architected hard disk tier should deliver solid performance, reducing both the amount of flash acceleration required and the performance variance when there is a cache or tier miss.
4 – Scalability
While SDS solutions provide unified management across discrete storage hardware, many cannot span capacity across those systems. And, practically speaking, which IT professionals would actually want to do that? Even with a single point of management, more storage systems from a variety of vendors still bring complexity: more vendor relationships to manage, more support contracts to understand, and different ways to interact with the hardware for routine maintenance. The inter-networking of these systems into the storage network also becomes more complicated and consumes expensive switch ports.
Even with SDS, storage administrators should still keep as few storage systems as possible in their environment. This calls for a storage system that can not only accept mixed media types but also scale to large capacities. Capacity scaling should not impact storage performance, however; the storage system should deliver nearly the same sustained performance in year one at its initial capacity as it will in year five at its maximum capacity.
Conclusion
Software may be eating the world, but it still needs hardware to run on and to store data. This is especially true in the SDS market. Organizations standardizing on SDS should also consider standardizing on storage hardware. While this sounds contradictory to the SDS mantra of no vendor lock-in, it is also the most realistic way to extract value from the initiative. SDS still provides the ability to switch hardware vendors when the organization wants to, but these shifts in hardware platform should occur gradually, not every time a hardware refresh is needed. To be successful, the IT administrator needs to look for storage hardware that can unbundle storage services, provide high levels of reliability, maintain high levels of performance, and scale well enough to limit the number of storage systems the organization needs in the future.
Sponsored by X-IO Technologies
About X-IO
X-IO Technologies is a leader in performance-driven storage, with industry-leading price, performance, and capacity ratios compared to other solutions on the market. The company provides flexible data services that can be turned off when used with an SDS solution, and its ISE product family continues to offer a five-year service guarantee along with the capacity and performance scaling to meet the needs of any data center exploring or standardizing on SDS.

