
Kubernetes Powering the Push to Edge Computing


Author: Wesley Reisz

Kubernetes has quickly become the standard for orchestrating containerized workloads. It started in the data center and cloud, and is now powering the extension to edge computing.

Increasing numbers of vendors are extending their offerings to the edge, from the big public providers (Azure IoT Edge, AWS Greengrass, and Google Cloud IoT Edge) to more specialized providers such as Section, FogHorn, and Mirantis, offering multi-cloud, multi-access edge computing platforms.

Traditional cloud computing is typically centralized and uniform, while edge computing involves geographically dispersed points of presence and a heterogeneous architecture. While there is constant debate over where the edge is, it is perhaps more appropriate to discuss the edge as a compute continuum, ranging from a national or regional data center, to a cell tower, all the way down to an IoT device.

Running infrastructure efficiently and cost-effectively in the complex landscape of edge computing requires rethinking existing approaches to deploying and managing compute resources. This past year, we faced these challenges head-on by migrating our Edge Compute Platform from a bespoke Docker scheduler to a Kubernetes deployment.

Why is Kubernetes a natural fit for edge computing?

In addition to very low downtime and excellent performance, Kubernetes offers many built-in benefits that address edge compute challenges, including flexibility, resource efficiency, performance, reliability, scalability, and observability.


Flexibility

Kubernetes reduces the complexity of running compute across numerous geographically distributed points of presence and varied architectures by providing flexible tooling that lets developers interact with the edge how and where they need to.

With Kubernetes behind our platform, we’re able to run containers at scale through an agnostic network of distributed infrastructure providers, which in turn, extends full flexibility to our users to be able to run edge workloads anywhere along the edge compute continuum to meet the specific needs of their application. Thanks to Kubernetes, any workload that can be containerized can now be deployed at the edge.

Resource Efficiency

Containers are lightweight by nature and allow you to tap into the underlying infrastructure in a highly efficient way. However, managing thousands or, in many cases, millions of containers across a distributed architecture gets complex very quickly. Kubernetes provides the underlying tools to efficiently manage container workflows through automated networking, storage, logs, alerting, and more.

The Kubernetes Horizontal Pod Autoscaler is one feature that naturally lends itself to edge computing efficiencies: it automatically scales the number of pods in a replication controller, deployment, or replica set up or down based on observed metrics, such as CPU utilization, request latency, or traffic volume. Think of a point of presence in an edge location that needs to handle a sudden traffic spike, for example during a regional sporting event. Kubernetes can detect the surge through its metrics pipeline and provision resources to match the rise and fall in demand. This not only takes the guesswork out of predicting and planning for infrastructure needs, but also ensures that you're only provisioning what your application needs in any given period.
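The autoscaling behavior described above can be declared in a few lines. The following is a minimal sketch using the `autoscaling/v2` API; the workload name `edge-proxy`, replica bounds, and the 70% CPU target are illustrative, not values from our platform:

```yaml
# Hypothetical HPA for an edge workload; scales the Deployment
# between 2 and 20 pods to hold average CPU near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-proxy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-proxy
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Custom metrics (such as request latency or volume) can be targeted the same way, provided a metrics adapter exposes them to the API.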


Performance

Modern applications require lower latency than traditional cloud compute models can provide. By running workloads closer to end users, applications recover valuable milliseconds and deliver a better user experience. As mentioned above, Kubernetes' ability to respond to latency and volume thresholds (via the Horizontal Pod Autoscaler) means capacity can be added at the edge locations closest to demand, keeping latency to a minimum.

Reliability & Scalability

One of the major benefits of Kubernetes is that it is self-healing: it restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that don't respond to your user-defined health checks.
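Those user-defined health checks are expressed as probes in the pod spec. A minimal sketch, with a hypothetical image and endpoints:

```yaml
# Illustrative container spec fragment; image name and HTTP
# endpoints are assumptions, not a real deployment.
containers:
  - name: edge-app
    image: registry.example.com/edge-app:1.4.2
    livenessProbe:          # kubelet restarts the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:         # pod is removed from Service endpoints until ready
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

The distinction matters at the edge: a failed liveness probe triggers a restart, while a failed readiness probe simply steers traffic away until the pod recovers.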

Before Kubernetes, we experienced service disruption whenever we upgraded a workload, because we had to close the TCP sockets bound to the workload's containers. In Kubernetes, Services abstract this process: we can start and stop containers behind a Service while Kubernetes continues redirecting traffic to the healthy containers, avoiding service interruptions.
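That zero-downtime upgrade pattern can be sketched as a Deployment with a rolling-update strategy behind a Service; all names, ports, and the image tag below are illustrative:

```yaml
# Sketch of a no-disruption upgrade: the Service routes only to
# ready pods while the Deployment replaces them one at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below desired capacity
      maxSurge: 1         # start one new pod before retiring an old one
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      containers:
        - name: edge-app
          image: registry.example.com/edge-app:1.5.0
---
apiVersion: v1
kind: Service
metadata:
  name: edge-app
spec:
  selector:
    app: edge-app       # traffic reaches only pods passing readiness checks
  ports:
    - port: 80
      targetPort: 8080
```

With `maxUnavailable: 0`, old containers are not stopped until their replacements pass readiness checks, so established clients never see a dropped connection pool.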

Furthermore, because the Kubernetes control plane can handle tens of thousands of containers running across hundreds of nodes, it allows applications to scale as needed, which makes it particularly well suited to managing distributed edge workloads.


Observability

Knowing where and how to run edge workloads to maximize performance, security, and efficiency requires observability, but observability in a microservices architecture is complex.

Kubernetes offers full visibility into production workloads, enabling real-time insights (including logging and aggregated metrics) that can drive provisioning decisions through configuration. The built-in resource metrics pipeline reports on pod and node utilization and feeds cluster components such as the Horizontal Pod Autoscaler; for richer metrics, you can also use a full metrics pipeline, such as Prometheus.
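As one example of wiring up a full metrics pipeline, Prometheus can discover pods through the Kubernetes API and scrape only those that opt in. The `prometheus.io/scrape` annotation used here is a widely used convention, not a built-in Kubernetes feature:

```yaml
# Minimal Prometheus scrape job using Kubernetes pod discovery.
# Pods opt in via the conventional prometheus.io/scrape annotation.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

This keeps scrape targets in sync with pods as they are scheduled and rescheduled across edge nodes, with no static target lists to maintain.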

Another area of observability that is of particular interest when it comes to edge computing is distributed tracing. Distributed tracing allows you to collect and build a comprehensive view of requests throughout the entire chain of calls made, all the way from user requests to interactions between hundreds of services. With this information, you can identify bottlenecks and opportunities for optimization.

Developers are the key to innovation at the edge

Hardware is an essential component to capitalizing on the potential of edge computing, but without software, the edge is just computers. As developers work to build and scale distributed systems, they require tooling that supports modern development workflows.

Because of the benefits outlined above, Kubernetes provides the building blocks that empower developers to build distributed systems and, ultimately, deliver better user experiences and greater efficiencies. By building our platform on top of Kubernetes, we've been able to give developers complete control and flexibility to run any workload anywhere along the edge continuum.