Is the Data Center Ready for Flash Archives? – Nimbus Data Briefing Note

Archiving, as a concept, looks great on paper. It relieves pressure on primary storage capacity, simplifies data protection and lowers overall storage cost. The problem comes when it is necessary to search for and retrieve data from that archive or, in the worst case, when an application wants to process data directly on archive storage. The time these actions take is often unacceptable in the modern data center.

At the heart of the problem is the media most archive solutions use. In an effort to drive down costs, most solutions sacrifice performance. The delta between the high performance of production flash media and the relatively low performance of the hard disk or tape media used for archive is simply too great. A flash tier would close that gap, but of course, the concern with flash is cost.

The advantages a flash archive would have over more traditional archive media are not only access time but also density: more capacity per square foot. Flash could deliver several times the density of hard disk drive and tape technology. For many organizations, the cost of building another data center is a far greater concern than the price difference in media. Imagine an archive system with dozens of petabytes of capacity per rack that also had near-instant access times. Such a system would allow the organization to freely embrace archiving to drive down costs, with almost no noticeable performance impact on data recall.

Introducing Nimbus Data ExaDrive DC100 – 100 TB Flash Drive

Nimbus Data was founded almost a decade ago by Thomas Isakovich. Its initial focus was on creating complete flash storage systems, but part of that work included its own unique implementations of flash media. Over the last few years, Nimbus Data has put more focus on its hardware creativity, and a result of that focus is the ExaDrive DC100.

The ExaDrive DC100 is a 100 TB flash drive based on the industry-standard 3.5” drive form factor. Nimbus Data projects that a single rack of DC100s will deliver 100 petabytes of capacity, not just dozens of petabytes. The drives connect via standard Serial ATA, which makes them almost instantly compatible with many storage systems on the market today. The DC100’s multi-processor architecture is what supports this much higher per-drive capacity.
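As a rough sanity check on that rack-level projection, the arithmetic is straightforward. A minimal sketch in Python; the drives-per-rack count is an assumed high-density layout used for illustration, not a Nimbus specification:

```python
# Back-of-the-envelope rack density for 100 TB drives.
DRIVE_TB = 100          # ExaDrive DC100 capacity in TB
DRIVES_PER_RACK = 1000  # assumed high-density 3.5" layout, not a Nimbus spec

rack_pb = DRIVE_TB * DRIVES_PER_RACK / 1000  # TB -> PB (decimal units)
print(f"{rack_pb:.0f} PB per rack")          # -> 100 PB per rack
```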

Although designed for secondary storage workloads, the DC100 still delivers excellent performance. Each drive delivers up to 100K read or write IOPS and up to 500 MBps of throughput. This evenly balanced read/write performance is ideal for a wide range of workloads, from big data and machine learning to rich content and cloud infrastructure.
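For context, the IOPS and throughput ratings are consistent with each other at small block sizes. A quick illustration; the 4 KB block size is an assumption, since the source does not state the block size behind the IOPS figure:

```python
# Relate IOPS to throughput: throughput = IOPS x block size.
iops = 100_000   # rated read or write IOPS
block_kb = 4     # assumed block size; not stated in the source

throughput_mbps = iops * block_kb / 1024  # MB per second
print(f"~{throughput_mbps:.0f} MBps at {block_kb} KB blocks")  # ~391 MBps
```

Larger block sizes would push throughput toward the drive’s 500 MBps ceiling.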

The ExaDrive DC series pricing is about the same as a typical SSD’s on a per-TB basis. When factoring in the drives’ energy efficiency and density, the cost savings add up quickly. Nimbus Data claims a 42% per-terabyte reduction in total cost of ownership, thanks to savings in power consumption, cooling requirements and rack space.
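To see how per-terabyte TCO can fall even when media pricing is flat, consider a toy model. All dollar figures below are invented placeholders chosen to land near the claimed 42%; they are not vendor data:

```python
# Toy per-TB TCO model: media + power + cooling + rack space.
def tco_per_tb(media, power, cooling, rack_space):
    """Sum per-terabyte cost components over the deployment's life."""
    return media + power + cooling + rack_space

# Placeholder inputs: identical media cost, lower operating costs for flash.
hdd_archive = tco_per_tb(media=40, power=25, cooling=18, rack_space=17)  # $100/TB
dc100_flash = tco_per_tb(media=40, power=7, cooling=5, rack_space=6)     # $58/TB

savings = 1 - dc100_flash / hdd_archive
print(f"TCO reduction: {savings:.0%}")  # -> 42% with these placeholders
```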

StorageSwiss Take

There are many use cases for the DC100. It should prove very popular with organizations that need high performance to support AI and machine learning, as well as big data processing. Cloud providers, who are constantly space constrained, should also be interested.

The flash-as-archive use case may come after the initial AI, ML and CSP adoption cycle, but it could have the most potential. Every organization with more than 1 PB of data (that’s just 10 DC100s) could benefit from a data management strategy based on DC100s.
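The drive-count math behind that parenthetical scales linearly, which makes rough sizing easy. A minimal sizing sketch; it ignores RAID/erasure-coding overhead and spares for simplicity:

```python
import math

DRIVE_TB = 100  # ExaDrive DC100 capacity

def drives_needed(archive_pb: float) -> int:
    # Raw capacity only; add overhead for data protection in practice.
    return math.ceil(archive_pb * 1000 / DRIVE_TB)

for pb in (1, 10, 100):
    print(f"{pb:>3} PB -> {drives_needed(pb):>4} drives")  # 10, 100, 1000
```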

The challenge for data management and archiving is getting the organization comfortable with letting the technologies do their work. The perceived delay in recalling data from a less expensive platform, especially when the organization is accustomed to flash performance, is often a deal breaker. The ExaDrive DC100 changes that perception, delivering an archive tier with recall times on par with production storage but without the production price tag. As a result, it should eliminate much of the resistance an organization might have to a data management strategy.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration and product selection.
