Cloud Native

How AI Is Reshaping Kubernetes and the CNCF Ecosystem: Insights from Akamai’s Alex Chircop

Guest: Alex Chircop
Company: Akamai
Show Name: KubeStruck
Topic: Kubernetes

Kubernetes turned ten last year, but its role in the industry is evolving faster than ever. With AI workloads reshaping how infrastructure is built and scaled, the CNCF ecosystem is entering a new phase—one that blends cloud native foundations with the demands of modern inference and intelligent applications. At KubeCon Atlanta, Alex Chircop, Chief Architect at Akamai, shared how this shift is unfolding across technology, people, and community.

The cloud native movement has seen rapid evolution in just a decade. What started with containers and orchestration has expanded into a complex ecosystem of storage, networking, security, observability, packaging systems, and GitOps tooling. Few people have observed this transformation as closely as Alex Chircop, a long-time contributor, technologist, and now Chief Architect at Akamai. Speaking at KubeCon in Atlanta, he reflected on where the community began and where it is now heading—with AI emerging as the defining force.

Looking back at the early days of Kubernetes, Chircop remembers a time when the community was small enough that meeting rooms never filled up. Adoption was limited, and many organizations were still trying to understand whether containers were a fad or a meaningful abstraction. For Chircop and his co-founder at the time, the moment of clarity came early. “This is it,” he recalls saying when he first saw what Kubernetes was enabling. Cloud native would soon become the standard for modern application development.

Over the years, the CNCF ecosystem grew from a handful of projects to hundreds. Containers gave way to orchestration, which in turn demanded advancements across networking (CNI), storage (CSI), security, service mesh, observability, and more. Helm brought sophisticated packaging. GitOps and CI/CD pipelines streamlined deployments. With every new need, a new project emerged—or an existing one matured—to meet the expectations of enterprises aiming for consistency, speed, and portability.

But at KubeCon this year, one topic dominated every conversation: AI. Whether the focus was inference, intelligent agents, or distributed model serving, AI workloads have become inseparable from cloud native technologies. “Cloud native and AI are becoming synonymous,” Chircop explained. He often repeats a line he first heard at a previous event: “If AI is the new application, Kubernetes is the new web server.” For many teams, this has already become reality.

AI workloads are fundamentally distributed and require sophisticated scheduling. GPUs must be allocated with precision. Model-serving systems must operate reliably across clusters. New orchestration layers—such as Volcano or KubeRay—are emerging to meet the unique needs of inference and agent-driven applications. Chircop points out that more of these projects are entering the CNCF ecosystem because they rely on the same cloud native principles that shaped Kubernetes.
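To make the "GPUs must be allocated with precision" point concrete, here is a minimal sketch of a Kubernetes pod spec that requests a dedicated GPU. The image name and node label are hypothetical, and the `nvidia.com/gpu` resource assumes the NVIDIA device plugin is installed on the cluster; this is an illustration of the mechanism, not a configuration discussed in the episode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-server            # illustrative name
spec:
  containers:
    - name: model-server
      image: example.com/model-server:latest   # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1         # whole-GPU allocation via the device plugin API
  nodeSelector:
    gpu-type: a100                  # hypothetical label steering the pod to a GPU node class
```

Note that extended resources like GPUs are requested via `limits`; the scheduler treats them as indivisible, which is exactly why higher-level schedulers such as Volcano add gang scheduling and queueing on top for multi-pod training and inference jobs.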

At the same time, the ecosystem continues to mature beyond the hype cycles that have defined different eras. Service mesh, observability, and even Kubernetes itself went through peaks of excitement and waves of growing pains. Today, many of these technologies are reaching a new stage where sustainability matters more than initial velocity. Funding models shift, engineering momentum changes, and communities adapt to keep vital projects healthy. According to Chircop, one focus area of the CNCF Technical Oversight Committee (TOC) is ensuring that essential but less “trendy” projects continue to thrive even if the spotlight has moved on.

The rise of AI is also reshaping how enterprises think about platform engineering. Internal developer platforms are becoming more opinionated and more complete. Observability is expanding to capture GPU performance, inference metrics, and model-serving behaviors. Security is seeing renewed pressure as AI workloads introduce new attack vectors, supply chain concerns, and regulatory expectations. Recent additions to the CNCF, like OpenFGA, reflect this increased focus.
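As one example of what "observability expanding to capture GPU performance" can look like in practice: if a cluster scrapes NVIDIA's DCGM exporter with Prometheus, a PromQL query along these lines surfaces GPU utilization next to ordinary application metrics. The exporter, metric, and label names here are assumptions about a typical monitoring setup, not details from the conversation:

```promql
# Average GPU utilization per node over the last five minutes
avg by (Hostname) (avg_over_time(DCGM_FI_DEV_GPU_UTIL[5m]))
```

The same pattern extends to inference-level signals such as request latency and tokens per second, which is where projects in the CNCF observability space are now adding AI-specific instrumentation.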

A key challenge Chircop highlights is the tension between open source infrastructure and proprietary AI models. While model weights are often closed or restricted, the tooling, orchestration, serving layers, and infrastructure surrounding them are overwhelmingly open source. “We should distinguish between open-sourcing of models versus open-sourcing of the infrastructure tooling,” he said. The latter remains the CNCF’s domain—ensuring that teams can build, deploy, and run AI systems with open, portable components. Even if the models themselves differ, the underlying stack can remain transparent and community-driven.

As AI communities and cloud native communities converge, another challenge emerges: culture. Many AI practitioners come from backgrounds with little exposure to open source norms; their priorities, workflows, and expectations differ from those of the cloud native ecosystem. Meanwhile, cloud native practitioners are now being asked to support workloads that behave differently, scale differently, and require entirely new operational models. Chircop believes this tension will resolve through evolution rather than disruption. Existing projects like Envoy, Kubernetes, and observability stacks are already adapting. New projects like KubeRay or Casa will fill in the gaps.

From Akamai’s perspective, the intersection of these communities is already visible. The company joined the Kubernetes AI Conformance initiative—a major announcement from the KubeCon keynote. The initiative focuses on ensuring that Kubernetes can run AI workloads in a portable and measurable way across different cloud providers. Every major cloud vendor, including Akamai, committed to proving conformance. The goal is simple: teams should be able to deploy AI workloads anywhere Kubernetes runs, with predictable behavior and without bespoke engineering for each environment.

This conformance effort is crucial for the next generation of AI companies—whether they are large proprietary model providers or small startups building their first inference clusters. Kubernetes provides the portability they need. Cloud native observability gives them operational insight. Open source security frameworks protect their deployments. GPU orchestration standards help avoid vendor lock-in. Chircop argues that this is exactly the kind of foundational work the CNCF was created to support.

As we look toward the next decade of cloud native, Chircop sees continuation rather than replacement. AI will reshape priorities, workloads, and architectures—but the underlying principles of portability, openness, and community remain. Kubernetes is not going away. Instead, it will become even more central as infrastructure expands to meet the demands of new applications.

KubeCon itself reflects this shift. Ten years ago, Kubernetes was a new idea. Today, it is the platform on which new ideas are built. AI is the latest addition, and the ecosystem is adjusting accordingly. The CNCF remains the home where open source infrastructure evolves, adapts, and thrives—even as the industry changes around it.

Chircop’s perspective reinforces a simple truth: cloud native is not a static category. It is a living community, continuously shaped by its participants and the workloads they bring. With AI now defining the future of software, the partnership between these worlds will determine what the next decade of innovation looks like.
