Containers have improved organizations' ability to rapidly develop, test and deliver applications. In many cases, however, storage is a boat anchor slowing the whole process down. As enterprises expand their use of container technology into production, IT planners need to reexamine their storage infrastructure and make sure it can address the challenges containers create.
Containers are often created to serve a particular request or purpose and are quickly removed when done. The idea is for the container to consume resources only while it is servicing the request and to return those resources when the task is complete. The surest way to ensure the release of resources is to remove the container.
While transient use of compute, memory and network resources is ideal, data has gravity. In many cases the data associated with a container needs to remain, or persist, so that other containers can use it later. The need for persistent data is especially apparent in distributed database applications like MySQL, MongoDB and Cassandra. Job #1 for a container-focused storage solution is to provide data persistence.
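As a sketch of what persistence looks like in practice, a named volume can outlive any container that mounts it. The service layout, volume name and credential below are hypothetical:

```yaml
# docker-compose.yml — illustrative only: a MySQL container whose data
# survives container removal because it lives in a named volume
version: "3"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - mysql-data:/var/lib/mysql    # data persists outside the container
volumes:
  mysql-data: {}                     # named volume managed by the storage driver
```

Removing the `db` container leaves `mysql-data` intact; a new container mounting the same volume picks up exactly where the old one left off.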
The storage architecture also needs to match the container deployment model: fast and dynamic. The storage solution needs pre-configured profiles so that, upon creation, a container gets the exact storage profile it needs or automatically reattaches to a persistent volume. For example, a database may need higher-performance storage and higher levels of availability, whereas a container managing unstructured data may need only basic performance and limited availability, but inexpensive, high-capacity storage.
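In Kubernetes terms, such pre-configured profiles map naturally onto StorageClasses. The sketch below uses a hypothetical provisioner name and parameter keys, not any specific vendor's API:

```yaml
# Illustrative StorageClasses for the two profiles described above.
# The provisioner name and parameter keys are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-tier              # high performance, high availability
provisioner: example.com/container-storage
parameters:
  media: nvme
  replicas: "3"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: capacity-tier        # basic performance, inexpensive bulk capacity
provisioner: example.com/container-storage
parameters:
  media: hdd
  replicas: "2"
```

A database's volume claim would name `db-tier`, while a container handling unstructured data would request `capacity-tier`; the storage layer handles the rest at creation time.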
Another key capability is data mobility. With most container technology, when a container is moved the associated data volume does not move with it. The storage architecture needs to track container movement and make sure that the container can still get to its data.
Container mobility solves part of the availability concern for distributed databases. But again, the moved or newly created container has to be reattached to its data, and if that data sits on a failed server it is inaccessible. The container storage solution needs to make sure data is redundantly available across nodes and that, upon a node failure, containers are reconnected to the redundant copies of that data.
Mobility also means multiple locations. Containers are designed to run virtually anywhere, so a container storage solution needs to be multi-cloud: the software has to run on-premises as well as in the various public cloud providers.
Finally, storage solutions focused on solving container challenges need to provide Quality of Service (QoS) capabilities. Given the number of containers that can be present at any point in time and the fluctuation in IO demand, QoS is arguably more important in a container environment than in any other.
Storidge Inc. is a software-defined storage company that seeks to revolutionize the way containers interact with their storage. Its product, Container IO (CIO), is a hyperconverged solution focused on delivering a storage layer that supports physical, virtual and cloud storage, with a primary focus on container-based applications. The software installs on each server (node) in the container orchestration cluster.
CIO automatically detects available storage resources. These can be drives locally attached to the cluster nodes, ephemeral drives from a public cloud provider, or network-attached storage (a LUN from a SAN or an EBS volume). CIO classifies those resources by type (hard disks, flash drives and NVMe devices) and aggregates them into virtual storage groups. CIO's QoS capabilities automatically move data between storage groups to ensure performance and availability expectations are met.
CIO integrates with the container orchestration system through a plug-in API. Pre-defined profiles ensure that, at creation, containers automatically get the specific storage configuration needed to meet performance, protection and capacity expectations.
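Under Docker's generic volume-plugin convention, a profile could be requested when a volume is created. In this sketch the driver name (`cio`) and the `profile` option key are assumptions, shown only to illustrate the mechanism:

```yaml
# Hypothetical: requesting a storage profile through a Docker volume plugin.
# The driver name and option key below are assumed, not confirmed vendor API.
version: "3"
services:
  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: cio               # vendor volume plugin (assumed name)
    driver_opts:
      profile: DATABASE       # pre-configured performance/protection profile
```

The orchestrator passes the option through to the plugin, which provisions the volume against the matching profile; the application itself needs no storage-specific changes.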
For enterprises looking to host production workloads on their container environment, CIO delivers capabilities they are accustomed to in more traditional systems: volume management, redundancy and availability, quality-of-service controls, shared storage efficiency and pooled performance via aggregation.
Containers are the next step for organizations looking to build more efficient and more nimble data centers. The challenge is twofold: continuing to support existing storage platforms while seamlessly enabling container capabilities that address the changing data center landscape. Storage solutions focused solely on container environments are often not robust enough to warrant serious consideration by the enterprise. Solutions like Storidge Container IO bridge that gap, creating a flexible infrastructure that delivers the enterprise-grade capabilities IT professionals expect for the modern data center.