Every IT organization strives to economize, but in the high-density, rack-mounted world of hyper-scale data centers, the pressure to reduce operating costs can be especially severe. These are the scale-out, clustered environments trusted to support most public clouds and many private clouds as well. Buying lower-cost hardware is one way to reduce cost, but this workaround can have negative consequences, prompting operations managers to look for an alternative.
Watch the on-demand webinar "Hyper-scale Nightmare: The Potential Consequences of using Consumer-grade Flash in the Data Center"
Commodity hardware not enough
These hyper-scale environments address the challenges of big compute and big storage by building highly standardized infrastructures, often composed of hundreds of rack-mounted, server-based systems. In these high node-count environments, there's an opportunity to reduce cost substantially by saving money on the hardware used in every server, as evidenced by the popularity of low-cost 'commodity' hardware. But the pressure to reduce costs is unending, putting the focus on the cost of components within these rack-mounted servers as well.
Many hyper-scale data centers support web-based applications and other transaction-heavy processing where performance is critical, so most of these clustered systems use server-side flash (SATA or SAS SSDs) as fast local storage. For them, replacing the enterprise-level flash drive in each server node with less expensive, consumer-grade drives provides a simple way to lower cost. When these drive-level savings are multiplied by the large number of servers in the data center, the economies of scale are substantial indeed.
But there can be a risk associated with this practice of using an SSD outside of the use case for which it was designed. Using consumer-grade drives could be called a "workaround" for the cost problem. While certainly not a 'duct tape and baling wire' kind of solution, there may be some unintended consequences to this cost-cutting strategy.
Reliability and replacement
The most obvious is reliability. Any storage device is vulnerable to a power failure while data is 'in flight', that is, before the data has been safely recorded on non-volatile media. Enterprise flash drives have power-loss protection circuitry (typically capacitors) to maintain power until this final write occurs; most consumer drives do not.
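To see why this matters, consider what an application can and cannot control. The sketch below (a hypothetical illustration, not code from the webinar) shows the standard way software requests durability: flushing its own buffer and then calling `fsync` to push data through the OS page cache to the device. Beyond that point, the application is trusting the drive; an SSD without power-loss protection can still lose data sitting in its volatile on-drive cache if power fails at the wrong moment.

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and ask the system to persist it to non-volatile media.

    fsync flushes the OS page cache and asks the device to commit its
    cache. A drive with power-loss protection can honor that request
    even through a power failure; a drive without it may not.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # flush Python's userspace buffer
        os.fsync(f.fileno())   # flush the OS page cache to the device

durable_write("example.dat", b"critical record")
```

The point of the sketch is that the last step, committing the drive's own cache, is invisible to software; it is a hardware guarantee that enterprise drives provide and most consumer drives do not.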
The result of this reliability issue is data corruption, which can be an even bigger problem when it occurs in the writing of metadata. These small but critical pieces of data are essential for the operation of the drive itself. When metadata gets corrupted, the drive may not run, or may not restart after a power outage. All this adds up to increased drive replacement, a process that adds to administrative overhead as well as drive costs.
Most consumer drives are designed for a mixed read and write workload, not the read-heavy workloads common in these environments. Their published performance specifications are also often measured before the drive has been filled, referred to as the "Fresh out of Box" or FOB state, not in the long-term operating condition that drives in an enterprise environment will see. The result is drive performance that's often lower and less consistent than enterprise drives, a situation that just gets worse as the drive ages. Another alternative to enterprise flash drives is needed in these hyper-scale data centers.
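The FOB-versus-steady-state gap is easy to expose with a simple measurement. The sketch below (a hypothetical benchmark, with made-up file names; real evaluations would use a dedicated tool against the raw device) times small synchronous writes and reports median and 99th-percentile latency. Run on a fresh drive and again after the drive has been filled and rewritten, the p99 figure in particular tends to diverge on consumer SSDs, which is exactly the inconsistency described above.

```python
import os
import statistics
import time

def measure_write_latency(path: str, block: int = 4096,
                          iterations: int = 1000) -> dict:
    """Time synchronous small-block writes and report latency percentiles (ms).

    Comparing results on a fresh drive vs. a filled/rewritten one
    illustrates the FOB-vs-steady-state performance gap.
    """
    buf = os.urandom(block)
    latencies = []
    with open(path, "wb") as f:
        for _ in range(iterations):
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force each write to the device
            latencies.append((time.perf_counter() - t0) * 1e3)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[int(iterations * 0.99)],
    }
```

A wide spread between p50 and p99, or a p99 that grows as the drive fills, is the kind of inconsistency that FOB-state spec sheets hide.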
For more information on this problem and the potential solutions, tune into the on-demand Storage Switzerland webinar "Hyper-scale Nightmare: The Potential Consequences of using Consumer-grade Flash in the Data Center". In it, Storage Switzerland founder George Crump and Shawn Worsell, product manager at OCZ/Toshiba, explain how using these low-cost drives can have some unintended consequences. You'll learn about another alternative that can deliver the cost savings you need without the negative side effects of consumer drives.