Author: Bjorn Kolbeck
The latest market data from 451 Research indicates that container adoption will continue to expand, with the market worth more than $2.1 billion in 2019 and more than $4.3 billion by 2022, a compound annual growth rate (CAGR) of 30%, according to its Market Monitor: Cloud-Enabling Technologies – Application Containers study. Growth in the container market and ecosystem is being driven by rising enterprise interest in helping application developers move faster, managing infrastructure more efficiently, and meeting digital transformation goals.
The exploding adoption of containers, and their ability to make efficient use of system infrastructure, delivers undeniable benefits to organizations seeking to develop and deploy applications at scale. Because of the number of moving parts in these environments, a container orchestration system such as Kubernetes is critical: it automates the management of storage, networking, alerting, logging and more. With a growing number of applications deployed in containers today, DevOps professionals must be deliberate about how they configure and provision the storage these workloads require.
Many administrators are tackling a wide range of Kubernetes-related issues, such as how to design and orchestrate storage infrastructure across one or more clusters. While it is common to deploy storage as a set of fixed containers on a cluster, a more efficient approach is to retain control over where data is stored, which supports more efficient workflows. As container deployments multiply, the aggregate throughput demanded of legacy storage products often creates bottlenecks that limit performance.
In enterprise-class Kubernetes environments with especially large volumes of data, the strategy for efficiently architecting storage often depends on the use case. For example, organizations deploying stateful containerized applications must couple those deployments with persistent storage. A novel approach is to overlay a parallel data center file system (software) across affordable commodity storage, rather than installing and expensively managing traditional storage systems, which lack the configuration and scaling advantages of software-defined storage. Additionally, because containers are lightweight by design and can be created and deployed within seconds, the associated storage must keep pace with those operational demands.
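In Kubernetes terms, coupling a stateful workload with persistent storage typically means the application requests a volume through a PersistentVolumeClaim. The sketch below shows a minimal claim; the storage class name `sds-parallel-fs` is an illustrative placeholder for a software-defined, parallel file system backend, not a real product name.

```yaml
# Minimal PersistentVolumeClaim for a stateful application.
# "sds-parallel-fs" is a hypothetical StorageClass name standing in for
# whatever software-defined storage backend the cluster provides.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sds-parallel-fs
  resources:
    requests:
      storage: 100Gi
```

A pod then mounts this claim by name, and the volume (and its data) survives pod restarts and rescheduling, which is what makes stateful containers practical.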
One example of a software-defined storage deployment in Kubernetes delivering measurable value comes from a large internet service provider (ISP) that achieved notable gains in storage efficiency, scalability and cost control. Focused on application development and operations, the ISP needed to manage its growing storage footprint while reducing costs and operational overhead, and decided to implement a software-defined storage platform to build a massively scalable, fault-tolerant storage infrastructure. In initial benchmarks of sequential and random read/write patterns across several block sizes, run on 18 storage nodes and 70 compute nodes, the newly deployed parallel file-based storage framework supported the provider's OpenStack and Kubernetes containerized infrastructure and delivered clear performance gains.
In a second use case, a loyalty currency platform provider made the move from AWS to an on-premises Kubernetes deployment with a focus on tapping the operational benefits of software-defined storage. Looking for ways to eliminate complexity and administrative overhead from their infrastructure, the organization determined that implementing a container infrastructure and building a Kubernetes stack would give them the scalability and flexibility required without generating significant administrative overhead.
Their goals for implementing Kubernetes were to ensure scalability and maintain performance targets. They decided to migrate from a variety of hand-crafted, lovingly maintained on-prem virtual machines to Kubernetes for a better container scheduling system. Although they were not shopping for a storage system, they knew that running Kubernetes at any scale would require storage to support persistent data. The loyalty currency provider considered the Ceph cluster they were already using, but knew from experience that managing Ceph would be non-trivial. They therefore searched for Kubernetes-first storage solutions that were affordable, required no custom hardware, and contained no unusual proprietary components that would prevent the use of their existing hardware.
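A "Kubernetes-first" storage solution on commodity hardware generally surfaces in the cluster as a StorageClass backed by a CSI driver. The fragment below is a sketch of what that looks like; the provisioner name `csi.example-sds.io` is a made-up placeholder, not a real driver.

```yaml
# Hypothetical StorageClass backed by a CSI driver for a software-defined
# file system running on existing commodity hardware.
# "csi.example-sds.io" is a placeholder provisioner name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sds-parallel-fs
provisioner: csi.example-sds.io
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

With a class like this in place, applications request storage declaratively and volumes are provisioned on demand from the existing hardware pool, which is precisely the low-overhead model the provider was after.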
Eventually, the organization deployed a data center file system that was cost-effective, could run in-house on existing hardware with minimal effort, and provided the functionality their Kubernetes clusters needed. One pilot deployment was a complete rebuild of the partner-interfacing system, which included an integration engine that talks to all the different airline, hotel, and loyalty partners. Originally a large monolithic application, the system was split into microservices and deployed on Kubernetes. This alone yielded a 10x performance improvement, and the team could now also add far more capacity.
As cloud-forward organizations move rapidly toward cloud-native infrastructure to run a growing number of containerized applications, the ability to support these environments with flexible, persistent storage will play a key role in ensuring continued adoption. Traditional storage appliances simply expose capacity to a container; parallel-throughput data center file systems, by contrast, run storage as an application inside the cluster, accelerating a much broader range of containerized workloads, including databases, scale-out apps and even big data analytics.
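For a database or other scale-out workload, the usual way to consume such cluster-resident storage is a StatefulSet with per-replica volume claims. The sketch below assumes the hypothetical `sds-parallel-fs` storage class; the image and names are illustrative only.

```yaml
# Sketch: a StatefulSet whose replicas each get their own persistent
# volume via volumeClaimTemplates. Names and storage class are examples.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16          # example database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: sds-parallel-fs
        resources:
          requests:
            storage: 50Gi
```

Each replica gets a stable identity and its own volume, so the database scales out while its data stays durable across reschedules.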