Cloud-scale Object Storage – where do you store the cloud itself?

“The Cloud”, a ubiquitous term for near-limitless storage and compute capacity, may seem like an abstraction to users, but the infrastructure challenges it brings are very real. Just ask the ‘hyper-scale’ companies that have developed their own systems to support the explosion of data stoked by the internet and the Internet of Things. Scale-out, object-based storage architectures are ideal for these unstructured data sets, but the commercially available solutions that cloud providers and enterprise companies must use have limits. Now “Himalaya”, the latest storage architecture from Amplidata, promises to keep the object storage cloud ahead of the data growth curve.

The original AmpliStor software was installed on an architecture of 1U storage and controller nodes, connected by a GigE/10GigE fabric. Storage nodes held 12 hard disk drives for up to 48TB of raw capacity each and handled bit-level data integrity and automated repair processes. Controllers leveraged Intel Xeon processors to run Amplidata’s erasure coding software and handle all metadata functions for local as well as geographically dispersed clusters, all within a global namespace.
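Amplidata's actual erasure coding (BitSpread) is proprietary and can tolerate multiple simultaneous drive or node failures, but the underlying idea can be illustrated with a toy scheme. This sketch splits an object into k data fragments plus a single XOR parity fragment, so any one lost fragment can be rebuilt from the survivors; all names here are illustrative, not Amplidata's API.

```python
from __future__ import annotations
import functools

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length fragments."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal data fragments plus 1 XOR parity fragment."""
    frag_len = -(-len(data) // k)                  # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")     # pad to k equal pieces
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    frags.append(functools.reduce(xor, frags))     # parity = XOR of all data
    return frags

def rebuild(frags: list[bytes | None]) -> list[bytes]:
    """Recover the single missing fragment (marked None) by XOR of the rest."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    frags[missing] = functools.reduce(xor, survivors)
    return frags
```

Real erasure codes generalize this with Reed-Solomon-style arithmetic, trading a configurable number of parity fragments for tolerance of many simultaneous failures at far less overhead than full replication.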


Now Amplidata has re-architected the platform around a new elastically scalable metadata software layer called “Scaler”, installed on nodes that sit in front of the storage and controller node clusters. Scaler consists of an SPX (Scaler reverse proxy) access layer that provides connectivity to the Internet and handles APIs and data encryption, plus a new SDB (Scaler database) layer that manages the object storage metadata and can be expanded to control more than one hundred trillion objects. With Scaler, the storage capacity of a Himalaya-based system can reach beyond exabytes to zettabytes in a single global namespace.
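Amplidata has not published Scaler's internal design, but a common way a metadata layer scales to trillions of objects is consistent hashing: each object key maps to a point on a hash ring, the next node clockwise owns its metadata, and adding a node migrates only roughly 1/N of the keys. The sketch below is a hypothetical model of that technique; the node names and class are assumptions, not the SDB API.

```python
import bisect
import hashlib

class MetadataRing:
    """Toy consistent-hash ring mapping object keys to metadata nodes."""

    def __init__(self, nodes: list[str], vnodes: int = 64):
        # Each node gets many virtual points so load spreads evenly.
        points = [(self._hash(f"{n}#{v}"), n)
                  for n in nodes for v in range(vnodes)]
        points.sort()
        self._hashes = [h for h, _ in points]
        self._owners = [n for _, n in points]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, object_key: str) -> str:
        """Return the node responsible for this object's metadata."""
        i = bisect.bisect(self._hashes, self._hash(object_key))
        return self._owners[i % len(self._owners)]
```

Because placement is computed from the key itself, any front-end proxy (like SPX) can route a request without consulting a central directory, which is what lets the namespace grow by simply adding nodes.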

The Himalaya architecture allows Scaler nodes to be added at any time to increase the number of objects under management, independent of the capacity or performance of the system. This lets users expand the infrastructure along multiple “axes”, so to speak, a capability Amplidata calls “3D Elastic Scalability”: they can add controllers to increase performance, add storage nodes to increase capacity, or add Scaler nodes to expand the namespace. Scalers replicate metadata locally to protect against node failure, or between geographically distributed sites for DR purposes, while guaranteeing strong data consistency and partition tolerance in a multi-geo setup.
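The strong-consistency claim for replicated metadata is usually achieved with overlapping quorums: with N replicas, writes acknowledged by W nodes and reads consulting R nodes satisfy W + R > N, so every read set intersects the latest write set. The sketch below is a generic textbook model of that rule, not Amplidata's published design.

```python
class QuorumStore:
    """Toy quorum-replicated key-value store (W + R > N)."""

    def __init__(self, n: int = 3, w: int = 2, r: int = 2):
        assert w + r > n, "read and write quorums must overlap"
        self.replicas = [dict() for _ in range(n)]  # one dict per replica node
        self.w, self.r = w, r
        self._version = 0

    def write(self, key: str, value: str) -> None:
        self._version += 1
        # In a real system only W acknowledgements are awaited; here the
        # first W replicas apply the write and the rest are assumed to lag.
        for rep in self.replicas[:self.w]:
            rep[key] = (self._version, value)

    def read(self, key: str) -> str:
        # Consult R replicas and return the highest-versioned value;
        # the quorum overlap guarantees at least one has the latest write.
        candidates = [rep[key] for rep in self.replicas[-self.r:]
                      if key in rep]
        return max(candidates)[1]
```

With N=3, W=2, R=2, even a read that lands on the most stale replica set still overlaps the last write on at least one node, which is the essence of strong consistency surviving a partition-prone multi-geo deployment.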

Himalaya also introduces a Cloud Services Gateway that connects at the Scaler layer and supports cloud provider provisioning and logistics needs. Each gateway can handle over 50 exabytes of capacity (50+ Scalers), and more gateways can be implemented as required.

APIs supported include REST (currently S3 compatible and Amplidata’s AXR) plus CIFS, NFS and iSCSI, as well as file sync and share services through certified partners. Existing Amplidata users can upgrade to the Himalaya architecture seamlessly, adding the Scalers and cloud gateways as needed.

Amplidata is offering two versions of the Himalaya software. The Service Provider and OEM edition is designed for cloud service providers and OEMs that require higher levels of customization for integration into their solutions, supporting multi-tenancy, customer provisioning and services management (SLAs, monitoring, reporting, etc.). It also provides a way to export metering information to the service provider’s customer portal, CRM and billing tools, and OEM rebranding is possible as well. The Enterprise edition is focused on easy, “out of the box” deployments of smaller systems, with a list of certified hardware platforms and applications.


Amplidata also announced that Verizon has chosen the Himalaya architecture for its Cloud Storage Services offering. Verizon evaluated all available technology options, including traditional storage vendors, new technology startups and open source solutions, and chose Amplidata for a number of reasons, saying the solution was designed with the enterprise customer in mind, offering the security, fault tolerance, durability and flexibility Verizon needed, at a massive scale. Amplidata is also the OEM technology behind Quantum’s Lattus product, among other technology partnerships.

StorageSwiss Take

This could be thought of as a ‘second degree’ scale-out architecture, since each scale-out cluster of storage and controller nodes behind a Scaler node can itself be scaled out. For cloud providers and enterprises looking for commercial infrastructure solutions, this makes sense on a couple of levels. It gives Amplidata the ability to increase its maximum scalable capacity under a single global namespace by a couple of orders of magnitude, giving these companies a way to stay ahead of the cloud growth curve. But it also leaves the existing architecture intact below this new layer of Scaler nodes, enabling a seamless upgrade for existing Amplidata users.


Eric is an Analyst with Storage Switzerland and has over 25 years experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States.  Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt.  He and his wife live in Colorado and have twins in college.

Posted in Briefing Note