Guest: Alex Chircop (LinkedIn)
Company: Akamai
Show Name: KubeStruck
Topic: Kubernetes
The cloud-native ecosystem faces a pivotal question: How do you prepare infrastructure designed for traditional workloads to support AI-native applications? Alex Chircop, Chief Architect at Akamai, addresses this challenge head-on in a discussion at KubeCon, explaining how the CNCF is evolving its project portfolio without abandoning its core strengths.
Evolution Over Revolution
The CNCF’s approach to AI workloads isn’t about replacement; it’s about evolution. Chircop highlights how existing projects like Kubernetes are gaining AI-specific capabilities rather than being displaced by new tools. Dynamic Resource Allocation (DRA) gives the scheduler first-class control over GPUs and other specialized devices, a critical requirement for training and inference workloads. Likewise, Envoy is being extended to build AI API gateways, demonstrating how mature cloud-native tools can adapt to emerging use cases.
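To make the DRA point concrete, here is a minimal sketch of how a pod can request a GPU through a ResourceClaimTemplate instead of a flat counted resource. The DeviceClass name and container image are hypothetical placeholders, a DRA driver is assumed to be installed, and the resource.k8s.io API version varies by Kubernetes release (v1beta1 as of 1.32):

```yaml
# Sketch: requesting a GPU via Dynamic Resource Allocation (DRA).
# Assumes a DRA driver that publishes a DeviceClass named
# "gpu.example.com" (hypothetical name).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: example.com/trainer:latest  # hypothetical image
    resources:
      claims:
      - name: gpu  # consume the claim declared below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```

Unlike the classic device-plugin model, the claim is a first-class API object, which lets drivers express device attributes and sharing semantics that a simple integer count of GPUs cannot.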
At the same time, purpose-built projects are entering the ecosystem. Casa, for example, orchestrates AI workloads across heterogeneous GPU clusters—addressing a challenge that traditional Kubernetes schedulers weren’t designed to handle. This dual approach ensures that organizations can leverage proven infrastructure while gaining access to specialized tooling for AI-specific requirements.
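For contrast, the baseline available in stock Kubernetes is static pinning with node labels: a workload is tied to one GPU type up front, which is exactly the rigidity heterogeneity-aware schedulers aim to remove. A minimal sketch, assuming NVIDIA’s GPU feature discovery is publishing product labels on nodes (the image name is hypothetical):

```yaml
# Baseline without a heterogeneity-aware scheduler: pin the pod to
# one GPU type via a node label. nvidia.com/gpu.product is published
# by NVIDIA GPU feature discovery; exact values depend on the
# cluster's hardware.
apiVersion: v1
kind: Pod
metadata:
  name: finetune-a100
spec:
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-A100-SXM4-80GB
  containers:
  - name: trainer
    image: example.com/finetune:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: "2"
```

Every new GPU SKU in the fleet means another hand-maintained selector; a scheduler that understands device capabilities can instead match workloads to whatever hardware satisfies their requirements.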
Infrastructure Versus Models: A Critical Distinction
One of the most important points Chircop raises is the distinction between open source AI models and open source AI infrastructure. While debates around model openness dominate headlines, the CNCF’s focus remains on the infrastructure layer—the orchestration systems, observability tools, and security frameworks that enable models to run at scale.
This distinction matters for enterprises. Whether using proprietary models from OpenAI or open weights from Meta, organizations need reliable, portable infrastructure to deploy and manage these systems. The cloud-native stack provides that foundation, with projects focused on LLM serving, GPU resource management, and inference cluster orchestration—all built on open source principles.
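As one illustration of what LLM serving looks like on this stack, the sketch below runs vLLM, an open source serving engine, as an ordinary Kubernetes Deployment. It assumes the NVIDIA device plugin is installed on the cluster, and the model ID is just a small open-weights example:

```yaml
# Sketch: serving an open-weights LLM with vLLM on Kubernetes.
# Assumes the NVIDIA device plugin exposes nvidia.com/gpu.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest
        args: ["--model", "facebook/opt-125m"]  # small model for illustration
        ports:
        - containerPort: 8000  # OpenAI-compatible API
        resources:
          limits:
            nvidia.com/gpu: "1"
```

The point is portability: the same manifest runs on any conformant cluster with GPUs, regardless of whether the weights behind it are open or proprietary.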
New Contributors, Evolving Communities
As AI workloads drive new contributors into the CNCF ecosystem, the community is adapting. Developers building AI-first applications bring different requirements than those focused on traditional microservices. They need specialized scheduling, advanced observability for model performance, and security frameworks that address AI-specific vulnerabilities.
The CNCF is responding by expanding its project landscape to accommodate these needs. New security tooling, observability stacks tailored to AI workloads, and orchestration systems designed for GPU-intensive applications are all part of this evolution. This isn’t a separate AI track; it’s the integration of AI capabilities into the broader cloud-native ecosystem.
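One hedged example of what observability “tailored to AI workloads” can look like in practice: serving engines such as vLLM expose Prometheus metrics (token throughput, time to first token, queue depth) on a /metrics endpoint, so a standard Prometheus Operator ServiceMonitor can pick them up. This sketch assumes the Prometheus Operator is installed and that a Service labeled app=llm-server, with a port named http, fronts the deployment from the earlier sketch:

```yaml
# Scrape model-serving metrics with the Prometheus Operator.
# Assumes a Service labeled app=llm-server exposes a port named
# "http" in front of the serving pods.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: llm-server
spec:
  selector:
    matchLabels:
      app: llm-server
  endpoints:
  - port: http
    path: /metrics
    interval: 15s
```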
Why This Matters Now
Enterprises investing in AI infrastructure face a critical choice: build on proprietary platforms or leverage open source cloud-native tools. Chircop’s perspective suggests that the cloud-native approach offers a more sustainable path. By evolving existing projects and introducing purpose-built tools within a unified ecosystem, organizations can avoid vendor lock-in while maintaining flexibility as AI technologies continue to advance.
The shift also signals a maturation of the cloud-native landscape. As projects move beyond the hype cycle, the focus shifts to sustainability, production readiness, and solving real-world operational challenges. AI workloads are accelerating this maturation, pushing the ecosystem to deliver more sophisticated orchestration, security, and observability capabilities.
For decision-makers evaluating AI infrastructure strategies, the message is clear: the cloud-native ecosystem isn’t just keeping pace with AI—it’s actively shaping how AI workloads will be deployed and managed in production environments.