Guest: Alex Chircop (LinkedIn)
Company: Akamai
Show Name: KubeStruck
Topic: Kubernetes
What happens when AI workloads collide with cloud native infrastructure? According to Alex Chircop, Chief Architect at Akamai, we are watching cloud native and AI become synonymous in real time. Speaking at KubeCon, Chircop delivered a clear message: AI is the new application, and Kubernetes is the new web server. This is not hyperbole. It is the technical reality shaping how organizations build and deploy AI systems at scale.
AI workloads are distributed by nature. They require sophisticated scheduling across heterogeneous hardware. They demand orchestration not just of compute resources but of inference workloads and future agentic systems. This complexity is pushing CNCF projects into new territory. Batch schedulers like Volcano and job queueing systems like Kueue are building inside the CNCF ecosystem because Kubernetes provides the orchestration backbone they need.
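The queueing pattern behind those projects is easy to see in code. Below is a minimal sketch using the official Python Kubernetes client, assuming a cluster that already has Kueue installed and a LocalQueue named gpu-queue (the queue, job, and image names here are hypothetical). The Job is created suspended and carries Kueue's queue-name label; Kueue admits it and lifts the suspension once GPU quota is available:

```python
from kubernetes import client, config


def submit_queued_job():
    # Load credentials from the local kubeconfig.
    config.load_kube_config()

    container = client.V1Container(
        name="trainer",
        image="ghcr.io/example/llm-finetune:latest",  # hypothetical image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            requests={"nvidia.com/gpu": "1"},
            limits={"nvidia.com/gpu": "1"},
        ),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(
            name="finetune-demo",
            # Kueue watches for this label and manages admission.
            labels={"kueue.x-k8s.io/queue-name": "gpu-queue"},  # hypothetical queue
        ),
        spec=client.V1JobSpec(
            suspend=True,  # start suspended; Kueue unsuspends on admission
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[container],
                ),
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)


if __name__ == "__main__":
    submit_queued_job()
```

Note that the Job itself is plain batch/v1. Kueue layers quota-aware admission on top through nothing more than a label and the suspend flag, which is exactly the kind of composition Kubernetes was built for.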
But the shift goes deeper than just running AI on Kubernetes. The entire cloud native ecosystem is maturing in response to AI demands. Observability is evolving to track distributed AI workloads. Platform engineering teams are building internal developer platforms with opinionated components that make deploying AI applications easier. Security is getting renewed focus. Projects like OpenFGA are joining CNCF to tackle access control challenges specific to AI systems. The Cyber Resilience Act and similar legislation are forcing foundation projects to address compliance at the infrastructure layer.
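To make the access control point concrete: OpenFGA models permissions as relationships and answers check queries against them. Here is a minimal sketch against its HTTP check endpoint; the store ID, relation name, and object type are hypothetical stand-ins for whatever an AI platform would actually define:

```python
import requests

FGA_API = "http://localhost:8080"  # local OpenFGA server
STORE_ID = "01HEXAMPLESTOREID"     # hypothetical store ID


def can_invoke(user: str, endpoint: str) -> bool:
    """Ask OpenFGA whether `user` may invoke the given model endpoint."""
    resp = requests.post(
        f"{FGA_API}/stores/{STORE_ID}/check",
        json={
            "tuple_key": {
                "user": f"user:{user}",
                "relation": "can_invoke",              # hypothetical relation
                "object": f"llm_endpoint:{endpoint}",  # hypothetical object type
            }
        },
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["allowed"]


if __name__ == "__main__":
    print(can_invoke("anne", "support-bot"))
```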
Chircop points out a critical tension in this evolution. Many foundational cloud native projects like service mesh and observability tools are coming out of their hype cycles. They reached maturity. Funding slowed. Engineering focus shifted. Now these projects need sustained maintenance even as they become critical infrastructure for AI workloads. The CNCF Technical Oversight Committee is working to ensure these projects remain healthy as they transition from innovation phase to infrastructure phase.
Meanwhile, existing projects are finding new life supporting AI use cases. Envoy is being adapted for AI API gateways. Storage and networking projects are optimizing for GPU-heavy workloads. The cloud native stack is proving flexible enough to support entirely new workload patterns without requiring a complete rebuild.
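Envoy-based gateways such as Envoy Gateway expose this kind of routing through the Kubernetes Gateway API. As a rough sketch, assuming a Gateway named ai-gateway and an inference Service named llm-inference (both hypothetical), an HTTPRoute can steer LLM traffic by path:

```python
from kubernetes import client, config


def route_llm_traffic():
    # Assumes a cluster running a Gateway API implementation such as Envoy Gateway.
    config.load_kube_config()

    route = {
        "apiVersion": "gateway.networking.k8s.io/v1",
        "kind": "HTTPRoute",
        "metadata": {"name": "llm-route"},
        "spec": {
            "parentRefs": [{"name": "ai-gateway"}],  # hypothetical Gateway
            "rules": [
                {
                    "matches": [{"path": {"type": "PathPrefix", "value": "/v1/chat"}}],
                    "backendRefs": [{"name": "llm-inference", "port": 8000}],
                }
            ],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="gateway.networking.k8s.io",
        version="v1",
        namespace="default",
        plural="httproutes",
        body=route,
    )


if __name__ == "__main__":
    route_llm_traffic()
```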
The real growth area is AI inference. Training gets the attention and the investment, but inference is where most production AI systems live. Running inference at scale requires exactly what Kubernetes provides: portable deployment across clouds, automated scaling, resource optimization, and observability. This is why Kubernetes AI conformance is becoming critical. Organizations need confidence that AI workloads will run consistently across different Kubernetes distributions and cloud providers.
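What that scaling looks like in practice can be sketched with a HorizontalPodAutoscaler attached to a hypothetical llm-inference Deployment. Production inference services more often scale on custom metrics such as request queue depth or tokens per second; the CPU target here simply keeps the example self-contained:

```python
from kubernetes import client, config


def autoscale_inference():
    # Assumes kubeconfig access and an existing Deployment named "llm-inference".
    config.load_kube_config()

    hpa = client.V2HorizontalPodAutoscaler(
        api_version="autoscaling/v2",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="llm-inference"),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1",
                kind="Deployment",
                name="llm-inference",  # hypothetical Deployment
            ),
            min_replicas=2,
            max_replicas=20,
            metrics=[
                client.V2MetricSpec(
                    type="Resource",
                    resource=client.V2ResourceMetricSource(
                        name="cpu",
                        target=client.V2MetricTarget(
                            type="Utilization",
                            average_utilization=70,
                        ),
                    ),
                )
            ],
        ),
    )

    client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )


if __name__ == "__main__":
    autoscale_inference()
```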
What does this mean for enterprise teams? If you are building AI systems, you cannot ignore the cloud native ecosystem. The tooling is already there. The community has solved many of the hard problems of running distributed systems at scale. You do not need to reinvent orchestration, observability, or security. You need to understand how to apply cloud native patterns to AI workloads.
The convergence of AI and cloud native is not a future trend. It is happening now. Organizations that understand this will move faster. They will build more reliable systems. They will avoid costly mistakes. The foundation is Kubernetes. The ecosystem is CNCF. The opportunity is understanding how to bring them together.