
Akamai’s Managed Container Service Brings Workloads Closer to Users


The enterprise container landscape is experiencing a seismic shift as organizations grapple with the complexities of deploying and managing Kubernetes at scale. In a recent episode of Cloud:Evolution, Ari Weil, VP of Product Marketing at Akamai, revealed how the company’s newly announced Managed Container Service is addressing these challenges head-on.

Breaking Down Operational Barriers

Akamai’s Managed Container Service, announced in February 2025, enables businesses to run data-intensive workloads closer to end-users while eliminating the operational overhead that has traditionally plagued enterprise Kubernetes deployments. The service leverages Akamai’s massive global footprint of over 4,300 points of presence across more than 700 cities worldwide.

“We’re taking advantage of the continuum of compute and the flexibility of optimized platforms,” Weil explained during the interview. “We’re simplifying the deployment and operations for our customers using containers, and then we’re allowing the customer to operate their application at scale.”

The Platform Engineering Challenge

As enterprises embrace platform engineering methodologies to accelerate development cycles, they’re encountering significant roadblocks. The first hurdle involves choosing the right container orchestration platform. While some organizations have experience with Docker, the Cloud Native Computing Foundation’s emphasis on Kubernetes has made it the de facto standard for cloud-native applications.

However, this standardization comes with its own set of challenges. Organizations must decide between using pure open source Kubernetes or proprietary implementations offered by hyperscaler clouds. The latter option introduces vendor lock-in concerns and operational complexity when managing multi-cloud environments.

Edge Computing Advantage

Traditional cloud providers excel at central scaling but struggle when extending to the network edge. This limitation becomes particularly problematic for data-intensive workloads such as AI inference, agentic AI, hyper-personalization, and media streaming applications that require low-latency performance.

“As much as they have capacity and availability to scale centrally, they really struggle when you go to scale out to the edge of the internet,” Weil noted of the hyperscaler clouds. Their CDNs typically lack the reach and compute addressability that Akamai maintains through its distributed architecture.
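The latency argument here comes down to geography: requests served from a nearby point of presence avoid long round trips to a centralized region. As a rough illustration (the PoP coordinates below are hypothetical, not Akamai's actual map), picking the closest edge location is just a great-circle distance comparison:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

# Hypothetical edge locations -- for illustration only.
POPS = {
    "frankfurt": (50.11, 8.68),
    "singapore": (1.35, 103.82),
    "sao-paulo": (-23.55, -46.63),
}

def nearest_pop(user_lat, user_lon):
    """Return the edge location closest to the user's coordinates."""
    return min(POPS, key=lambda p: haversine_km(user_lat, user_lon, *POPS[p]))
```

With thousands of PoPs rather than three, the distance (and thus the speed-of-light floor on latency) to the nearest one shrinks dramatically, which is the structural advantage the article describes.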

Managed Kubernetes at the Edge

Akamai’s approach solves multiple problems simultaneously. Organizations can leverage managed Kubernetes services without sacrificing the portability benefits of open-source implementations. The platform’s global footprint enables granular scaling based on demand patterns across thousands of edge locations.

The service manages traffic bursts and demand spikes automatically, using customer-defined business logic and application requirements to guide distribution and scaling decisions. This approach allows development teams to focus on application logic rather than infrastructure management.
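Akamai has not published the internals of this scaling logic, but the general pattern, familiar from Kubernetes' Horizontal Pod Autoscaler, is to size each location's replica count from observed demand against a customer-declared per-replica capacity, within customer-declared bounds. A minimal sketch of that idea (all names and numbers here are illustrative assumptions, not Akamai's API):

```python
from math import ceil

def desired_replicas(demand_rps, rps_per_replica=100, min_replicas=1, max_replicas=50):
    """HPA-style sizing: enough replicas to absorb demand, clamped to bounds.

    rps_per_replica, min_replicas and max_replicas stand in for the
    'business logic and application requirements' a customer would declare.
    """
    return max(min_replicas, min(max_replicas, ceil(demand_rps / rps_per_replica)))

def plan_scale_out(demand_by_region):
    """Map observed per-region demand (requests/sec) to a replica plan."""
    return {region: desired_replicas(rps) for region, rps in demand_by_region.items()}
```

For example, a burst of 950 req/s in Europe would plan 10 replicas there while quiet regions stay at the 1-replica floor, which is the kind of decision the platform makes on the customer's behalf so teams can stay focused on application logic.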

Cost and Performance Benefits

Beyond operational simplification, Akamai’s flat, predictable pricing model combined with generous egress allowances addresses one of the most significant pain points in cloud computing: unpredictable data transfer costs. This pricing structure, coupled with the performance benefits of edge deployment, creates a compelling value proposition for enterprises running distributed workloads.

Looking Forward

The launch of Akamai’s Managed Container Service represents a significant evolution in the container orchestration landscape. By combining the flexibility of open-source Kubernetes with the reach of a global edge computing platform, Akamai is positioning itself as a viable alternative to traditional hyperscaler approaches.

For enterprises struggling with the complexity of multi-cloud Kubernetes deployments or seeking to optimize performance for geographically distributed users, this managed service offers a compelling path forward. As edge computing continues to gain traction for AI and IoT applications, solutions like Akamai’s may well define the next chapter of cloud-native infrastructure.

The full interview provides deeper insights into platform engineering strategies and the evolving landscape of distributed computing. Watch the complete episode on TFIR to learn more about how edge-native container services are reshaping enterprise application deployment.


Transcript

Ari Weil: Akamai has announced a managed container service, which is a way for customers to provide us with an application in a containerized environment that we will host, operate, and distribute on behalf of the customer. Our customers tell us that they have an application that they need to reach a certain audience. They give us the business logic and the requirements for that application, and then we deploy it on the Akamai platform with the ability to scale the compute resources out to any edge location that we have on our distributed network of 4,200-plus points of presence. We can leverage any of the compute resources that we have from the core to the edge, and so we’re taking advantage of the continuum of compute and the flexibility of our optimized platform.

We’re simplifying the deployment and operations for our customers using containers, and then we’re allowing the customer to operate their application at scale so that they can reach and deliver an optimal experience to all of their users, wherever they are. This eliminates complexity, speeds time to market, and because of the flat, predictable pricing that Akamai has and our very generous egress allowances and low-cost egress, we also believe that it will dramatically improve the performance and cost profile of their application.

Swapnil Bhartiya: If you look at Akamai—of course, through the Linode acquisition as well—you have presence in 300-plus cities with 4,300 points of presence. What does it mean for enterprises who do want to deploy cloud-native applications with cloud-native architecture? What are the roadblocks that they hit that this managed container service and optimized reach will address?

Ari Weil: Absolutely. I think a lot of companies, as they start to move towards platform engineering as a way to accelerate their time to market and simplify operations, are running into some challenges. The first question is: if I want to use a container, what sort of container platform should I use? Some enterprises, for example, have experience with Docker. Akamai and the Cloud Native Computing Foundation really favor Kubernetes because of the portability that it enables and the speed of improvement of the platform with all of the open-source contributions that Kubernetes enjoys.

As you start to deploy any containerized environment, the first question is: do my engineers and developers understand how to build in Kubernetes and scale Kubernetes at scale? They can use Kubernetes as-is on the Akamai platform—formerly Linode—or they can decide that they want to start using a proprietary implementation from the hyperscalers.

That brings in the next consideration: how am I going to consider my application architecture over time? Do I need to take advantage of multiple clouds? In which case, using proprietary implementations of Kubernetes presents a challenge. There’s operational overhead, there’s a skills gap, and there’s the need to really think about which flavor of Kubernetes I’ll use for which elements of my application. If we focus just on using pure open-source Kubernetes, then you don’t have to consider those things with the Akamai platform, because you’ll build on Kubernetes as the open-source project, and then that is fully portable to any cloud as you start to scale and do your day-two operations.

When you think about managing a fleet of containers and different scalability and security requirements, the overhead and toil associated with scaling Kubernetes becomes a real challenge. With that, we have our application platform, which is also based on open source, that allows you to deploy golden templates that will give you ready-to-run application environments on Kubernetes that are much easier and faster for your organization to scale.
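The "golden template" idea Weil describes is essentially parameterized, pre-vetted manifests: a team supplies a name and an image, and the platform stamps out a known-good Kubernetes object. A minimal sketch of that pattern (this is a generic `apps/v1` Deployment rendered as a plain dict, not Akamai's App Platform API):

```python
def golden_deployment(name, image, replicas=2, port=8080):
    """Render a minimal Kubernetes Deployment from a 'golden template'.

    Defaults (replicas, port) represent organization-approved baselines;
    teams only supply what varies per application.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }
```

Because the output is plain open-source Kubernetes, the same rendered manifest can be applied to any conformant cluster, which is the portability argument made above.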

This brings the third question: when you scale, where are you scaling to, and what sort of outcome are you pursuing? If your goal is to realize low-latency application performance—and especially if you have a very data-intensive workflow, something like AI inference or agentic AI, or more mainstream use cases like hyper-personalization or even media streaming use cases—then the limitations of the existing hyperscaler clouds become apparent. As much as they have capacity and availability to scale centrally, they really struggle when you go to scale out to the edge of the internet, because their CDNs typically don’t have the same reach that Akamai has. They don’t have the addressability of the compute resources that Akamai has on our platform.

You end up with this challenge: do I want to now start to also—in addition to operating Kubernetes at scale—operate my own versions of, say, AWS Outposts, so that I can continue to use AWS services but have the ability to extend to where my customers are?

With the Akamai managed container service, you can take advantage of a managed Kubernetes service in the Akamai cloud. You can take advantage of the full footprint and presence that Akamai maintains in over 700 cities worldwide. Then you have the granularity to scale up where you see demand in those 4,300-plus points of presence that we maintain. We can manage a lot of the fluidity that exists with bursts in traffic or bursts in demand by basically managing the distribution and the scale-up of your container on your behalf, where we’re using your business logic and the requirements of your application to guide us. You then have the flexibility as a customer to just focus on operating the application itself, not the application and the infrastructure that it runs on.
