The Impact of Extremely High Density SSD Drives – Viking Technology Briefing Note

The next generation of solid state drives (SSDs) is coming to market. Some are focused on performance, leveraging NVMe connectivity to reduce latency. Others are focused on extremely high capacity (50TB+) and are designed for the dense data center storage that powers analytics and big data processing.

The Incredible Shrinking Data Center

As organizations continue to create and process an ever-increasing amount of data, one of the biggest challenges they face is controlling the size and power consumption of their data centers. Many organizations have a policy that states, for every new piece of technology brought in, an old piece of technology has to be removed. Ultra-high capacity SSDs can shrink the storage footprint substantially.

With ultra-high capacity SSDs, a simple 4U, 24-drive server will be able to store 1.2 petabytes (PB) of data. Consider a three-node hyperconverged environment equipped with these servers and drives. In 12U, the organization has access to 3.6PB of storage. Even with the most cautious of data protection schemes, the result is potentially more capacity than most data centers need.
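The capacity math above can be sketched in a few lines. The 50TB-per-drive, 24-drive-per-server, and three-node figures are the numbers used in this note, not a specific vendor configuration:

```python
# Back-of-the-envelope capacity math for an ultra-high density SSD cluster.
# Assumes 50 TB drives, 24 drives per 4U server, and a 3-node cluster.
DRIVE_TB = 50
DRIVES_PER_SERVER = 24
NODES = 3

server_pb = DRIVE_TB * DRIVES_PER_SERVER / 1000   # TB -> PB per 4U server
cluster_pb = DRIVE_TB * DRIVES_PER_SERVER * NODES / 1000

print(f"Per 4U server: {server_pb} PB")    # 1.2 PB
print(f"3-node cluster: {cluster_pb} PB")  # 3.6 PB in 12U
```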

The Cost Impact

The expectation is that, on a per-GB basis, these drives will be more expensive than equivalent capacity built from smaller current SSDs or hard disk drive (HDD) technology. But consider that these drives take up less physical space and are expected to require less than 16 watts of power when active. When the total cost of delivering petabytes of capacity to the organization is considered, these drives are a bargain. At such a low watt-per-GB or watt-per-TB utilization, they compare favorably even to cold storage applications.
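The watt-per-TB argument is easy to quantify. The 16W active figure comes from the text; the nearline HDD numbers below are illustrative assumptions for comparison, not measured values:

```python
# Power density comparison. SSD figures are from the briefing note;
# the nearline HDD figures (8 W active, 10 TB) are assumed for illustration.
SSD_ACTIVE_W = 16
SSD_TB = 50
HDD_ACTIVE_W = 8   # assumption: typical nearline HDD
HDD_TB = 10        # assumption: typical nearline HDD capacity

ssd_w_per_tb = SSD_ACTIVE_W / SSD_TB
hdd_w_per_tb = HDD_ACTIVE_W / HDD_TB

print(f"Ultra-high density SSD: {ssd_w_per_tb:.2f} W/TB")  # 0.32 W/TB
print(f"Nearline HDD:           {hdd_w_per_tb:.2f} W/TB")  # 0.80 W/TB
```

Under these assumptions the dense SSD delivers each terabyte at well under half the active power of a nearline HDD, before counting the rack space saved.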

The Performance Impact

From a per-drive perspective, these drives are going to deliver about 60,000 read IOPS and 15,000 write IOPS. Fast, but not the fastest drives on the market. However, given that the drive is not looking to compete directly with performance SSDs, but rather to serve as an HDD replacement in the data center, that performance is more than enough. The other challenge is that when a 4U storage system has over 1PB of storage, the expectation is that it will have a lot of workloads coming at it. More than likely, the CPUs and the storage software that come in the unit won't be able to handle all the potential IO.
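Aggregating the per-drive numbers shows why the controller, not the drives, becomes the limit. The per-drive IOPS figures come from the text; the 24-drive chassis and the controller ceiling are assumptions for the sake of the example:

```python
# Why "moderate" per-drive IOPS still overwhelm the storage controller.
# Per-drive figures are from the briefing note; the controller ceiling
# (500,000 IOPS) is an assumed midrange figure, not a measured spec.
READ_IOPS_PER_DRIVE = 60_000
WRITE_IOPS_PER_DRIVE = 15_000
DRIVES = 24

aggregate_read = READ_IOPS_PER_DRIVE * DRIVES    # 1,440,000 read IOPS
aggregate_write = WRITE_IOPS_PER_DRIVE * DRIVES  # 360,000 write IOPS

CONTROLLER_CEILING = 500_000  # assumption: what the CPUs/software can service
print(aggregate_read > CONTROLLER_CEILING)  # the drives outrun the controller
```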

The Ultra-High Density SSD Use Cases

A system that uses ultra-high density SSDs won't be expected to support only highly transactional workloads. Cloud providers will leverage these systems purely for the density they provide. The cost savings of not having to build the next data center will be all the return on investment they need; the next server could cost well over one million dollars if a data center has to be built to house it.

Systems equipped with these drives will be used for read-intensive workloads where data is written once and then analyzed again and again. Data center projects that leverage Hadoop, Spark, Splunk and the like are ideal candidates.

A storage system equipped with these drives could also appeal to medium or large data centers. Again, a three-node hyperconverged system equipped with these drives could shrink an organization's entire data center to a single rack. Most legacy data centers don't need the write IOPS, and they don't tend to have workloads that continuously write data. The typical pattern is an occasional write peak surrounded by a run of reads, which means these systems, equipped with average processing power, could meet the challenge.

These drives are also SAS attached, which is logical given their size and performance capabilities. But if these drives are coupled with a smaller tier of NVMe drives, managed by software so that writes initially go to the NVMe tier and then de-stage to the ultra-high density SSD tier, the combination could be a perfect match for many data centers.
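The write-tiering idea can be sketched as a small simulation: writes absorb into a limited NVMe tier, and a de-stage pass moves the oldest blocks down to the dense SAS SSD tier. The class and method names here are illustrative, not the API of any real tiering product:

```python
# Minimal sketch of NVMe write-buffering with de-stage to a dense SSD tier.
# Names and the de-stage policy (oldest-first, drain to half full) are
# illustrative assumptions, not a real product's behavior.
class TieredStore:
    def __init__(self, nvme_capacity_blocks):
        self.nvme_capacity = nvme_capacity_blocks
        self.nvme_tier = {}      # block_id -> data (small, fast NVMe tier)
        self.capacity_tier = {}  # block_id -> data (ultra-high density SAS SSDs)

    def write(self, block_id, data):
        # All writes land on the NVMe tier first.
        self.nvme_tier[block_id] = data
        if len(self.nvme_tier) > self.nvme_capacity:
            self.destage()

    def destage(self):
        # Drain the oldest blocks down to the capacity tier (dicts preserve
        # insertion order, so the first key is the oldest write).
        while len(self.nvme_tier) > self.nvme_capacity // 2:
            block_id = next(iter(self.nvme_tier))
            self.capacity_tier[block_id] = self.nvme_tier.pop(block_id)

    def read(self, block_id):
        # Reads check the fast tier first, then fall through to capacity.
        if block_id in self.nvme_tier:
            return self.nvme_tier[block_id]
        return self.capacity_tier.get(block_id)
```

For example, a store built with `TieredStore(4)` that receives ten block writes ends up with the most recent writes on the NVMe tier and the older blocks de-staged, while reads still find every block.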

StorageSwiss Take

Ultra-high density drives like those from Viking Technology promise to have a major impact not only on storage architecture design but also on data center design. Hybrid storage, with the ability to move data from a fast tier of storage to a slower tier, is suddenly in vogue again. Hyperconverged architectures also stand to gain, since most scale-out decisions are driven by capacity needs more than any other factor. The ability to scale at a slower pace will simplify the hyperconverged model even further.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
