Why HGST is now building Storage Systems

Historically, HGST (a subsidiary of WD) and other drive manufacturers have done an impressive job with innovation, both from an incremental perspective (when you build millions of units, you get good at fine-tuning a technology) and with larger-scale changes, such as the company’s switch from air-filled to helium-filled drives. As a foundational component of essentially every piece of IT gear, disk drives have to adhere to specific design and manufacturing standards.

But this drive-level focus is actually a restriction when it comes to storage system design improvements, as there’s only so much you can do at the drive level to improve power consumption, performance and capacity. At the drive tray or chassis level, however, more options for innovation open up.

More Opportunities to Innovate

This is the impetus for HGST’s Active Archive platform, the rack-level systems they’ll roll out next year to house the dense configurations of high-capacity drives that cloud providers and other hyper-scale environments need. By focusing on the array or the system instead of just the drives, HGST can fully leverage their unique abilities to innovate in the development of new storage technologies.

As an example, reducing drive vibration is a fundamental objective for storage system designers, but vibration isn’t just a drive-level phenomenon. While drives are designed and tuned by the manufacturer to reduce vibration and avoid harmonic frequencies, each drive in an array affects its neighbors’ vibration as well. If the manufacturer can look at the characteristics of the entire array instead of just individual drives, they can be much more effective at tuning drives to control vibration. One result of reducing vibration is that data tracks can be placed closer together, improving storage density and increasing drive capacity.
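To make the array-level idea concrete, here is a toy sketch (not an actual HGST algorithm; all names and tolerances are assumptions) of one thing an array-aware tuner could do: flag pairs of drives whose low-order spindle harmonics nearly coincide, since coincident harmonics reinforce vibration across a shared chassis.

```python
def harmonic_conflicts(rpms, orders=3, tol_hz=2.0):
    """Return (i, j) index pairs of drives whose spindle harmonics
    (up to `orders` multiples of the fundamental) fall within
    `tol_hz` of each other and could reinforce chassis vibration."""
    conflicts = []
    for i in range(len(rpms)):
        for j in range(i + 1, len(rpms)):
            # Fundamental rotational frequency in Hz (RPM / 60)
            fi, fj = rpms[i] / 60.0, rpms[j] / 60.0
            for m in range(1, orders + 1):
                for n in range(1, orders + 1):
                    if abs(m * fi - n * fj) < tol_hz:
                        conflicts.append((i, j))
                        break
                else:
                    continue  # no match at this m; try the next harmonic
                break  # conflict recorded; move on to the next pair
    return conflicts
```

A drive-level designer only sees one spindle; an array-level designer can run a check like this across the whole shelf and stagger spin-up or adjust mounting to keep conflicting pairs apart.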

Drive Platters

HGST’s new 10TB drive has seven platters and 14 surfaces on which to write data. If something happens to corrupt one of those surfaces, the solution today is to replace the drive. But if those 14 surfaces could be managed in a parity scheme with the 100+ surfaces on the other drives in that shelf, the array controller could recreate the corrupt surface transparently to the hosts or applications using the array. Given the rebuild times for RAID sets built from large drives, this could be a much more appealing solution than replacing a drive and conducting a typical RAID rebuild.
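The paragraph above can be sketched with simple XOR parity. This is an illustrative model only, not HGST’s actual scheme: each surface is modeled as a byte string, one parity “surface” protects the rest, and a single corrupt surface can be rebuilt from the survivors.

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte blocks together."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(surfaces: list[bytes]) -> bytes:
    """Compute a parity surface over all data surfaces."""
    return reduce(xor_blocks, surfaces)

def rebuild(surfaces: list[bytes], parity: bytes, lost: int) -> bytes:
    """Recreate the surface at index `lost` from the surviving
    surfaces plus the parity surface."""
    survivors = [s for i, s in enumerate(surfaces) if i != lost]
    return reduce(xor_blocks, survivors, parity)
```

With surfaces pooled across a shelf, losing one surface means recomputing a few terabytes of XOR rather than resilvering an entire 10TB drive.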

Power and Cooling

Power and cooling are fundamental design characteristics as well. Being able to ‘think outside of the box’ (or outside of the drive in this case) and monitor temperatures in the array can allow system designers to more efficiently control fans and reduce power accordingly. These are just a few of the ways that drive manufacturers could leverage array-level data and create better products for the industry.
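As a minimal sketch of that array-level cooling idea (the function name and temperature thresholds are assumptions, not an actual HGST interface), a controller could map the hottest drive temperature in the array to a fan duty cycle:

```python
def fan_duty(temps_c: list[float], idle_c: float = 30.0, max_c: float = 55.0) -> float:
    """Proportional fan control: map the hottest drive temperature
    in the array to a duty cycle between 0.2 (idle) and 1.0 (full)."""
    hottest = max(temps_c)
    if hottest <= idle_c:
        return 0.2
    if hottest >= max_c:
        return 1.0
    # Linear ramp between the idle and maximum thresholds
    return 0.2 + 0.8 * (hottest - idle_c) / (max_c - idle_c)
```

Because the controller sees every drive’s temperature, it can run the fans at the minimum speed the worst-case drive requires, saving power compared with conservative per-slot defaults.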

Data resiliency, vibration and power are certainly areas in which the system builders (the OEM customers of the drive vendors) can innovate. But I think the drive vendors are in a better position to exploit an array-level focus for a couple of reasons.

Innovation at the Array Level

First, drive manufacturers build millions of units every year and have some of the most advanced QA, test and development capabilities of any manufacturing industry. In addition, their deep knowledge of the drive itself (the atomic unit, so to speak) enables them to apply that understanding ‘up the stack’ and innovate at the array level better than systems vendors, who lack that drive-level understanding. But there also needs to be some change in the area of standards.

New Standards

Historically, drive manufacturers have sold to large OEMs and system builders. These assemblers required drives and the other components in a computer system to comply with industry standards. But now, large cloud and service providers are building their own infrastructures. Their focus at the array level has really driven the emergence of the ‘white box’, the commodity x86-based server chassis that are used for compute, storage, applications, everything in IT.

These new users (the hyperscale data centers) are interested in buying complete systems or racks of systems, not drives or other “bare metal” components. To them, the minimum replaceable unit is the array or the system, not the drive. Their focus is on achieving maximum performance, reliability and efficiency, not designing the white box systems that populate their hyper-scale data centers. In response, the industry needs to take this step and allow the disk array, not just the disk drive, to become a standardized unit.

The Permission to Innovate

If the drive vendors can sell the chassis or the array as the standardized unit, instead of the disk drive, they’ll get more latitude to build the system according to the requirements of their end user customers. They’ll get the permission to innovate in a new and more powerful way.

Of course, companies like HGST don’t need anyone’s permission to develop new technologies, but encouraging them to think outside of the box – the disk drive ‘box’ in this case – could allow them to exploit the knowledge they’ve developed from building millions of drives and apply it to making better systems. This will certainly make the cloud data center operators happy, but should also help their traditional OEM and system builder customers. As the commoditization of IT hardware progresses, they too are looking for innovative solutions at the system level.


Eric is an Analyst with Storage Switzerland and has over 25 years’ experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.
