Guest: Alex Chircop (LinkedIn)
Company: Akamai
Show Name: KubeStruck
Topic: Kubernetes
The AI boom has created an unexpected divide: AI companies building frontier models and cloud-native infrastructure teams operating at scale often speak different languages. At KubeCon in Atlanta, Alex Chircop, Chief Architect at Akamai, addressed this head-on, explaining how the newly announced Certified Kubernetes AI Conformance Program is creating a unified platform for both worlds.
Two Ecosystems, One Infrastructure
The divide is real. Major AI vendors like OpenAI and Anthropic aren’t CNCF members, and many AI practitioners come from data science backgrounds with little exposure to cloud-native principles. Meanwhile, cloud-native engineers have been perfecting container orchestration for a decade without necessarily focusing on GPU-intensive workloads or LLM serving patterns.
But as Chircop points out, the organizations running AI infrastructure at scale are deeply embedded in the CNCF ecosystem. “What you’ll find is that the organizations that are running those infrastructures are part of the CNCF, and I think that’s where the focus is,” he explains.
The Kubernetes AI Conformance Initiative
Announced during the KubeCon keynote, the Kubernetes AI Conformance initiative represents a critical standardization moment. The goal is straightforward: ensure Kubernetes can run AI workloads in a portable, reliable way across any conformant platform.
Major cloud providers, including Akamai, have already signed up and proven compatibility. This isn’t just symbolic—it creates practical interoperability for companies building AI applications. Whether deploying on Akamai’s distributed cloud, a hyperscaler, or hybrid infrastructure, teams can now rely on consistent behavior for AI workloads.
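To make the portability claim concrete, here is a minimal sketch of the kind of GPU workload request that should behave the same on any conformant cluster. It uses the official Kubernetes Python client; the pod name, container image, and namespace are illustrative placeholders, and the `nvidia.com/gpu` extended resource assumes the NVIDIA device plugin is installed on the cluster, which is an assumption of this example rather than something the conformance program itself dictates.

```python
from kubernetes import client, config

config.load_kube_config()  # authenticate using the current kubeconfig context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="example.registry/llm-server:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    # GPUs are extended resources: they must be requested in
                    # limits, and the scheduler will only bind the pod to a
                    # node that has a free device.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Nothing in this object is provider-specific, which is the point: the same manifest can be submitted unchanged to any conformant cluster.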
Why This Matters for Enterprise AI
For enterprises building AI capabilities, this standardization solves several critical problems. First, it prevents vendor lock-in at the infrastructure layer. AI workloads are resource-intensive and expensive—being able to move inference clusters between conformant providers based on cost, latency, or regional requirements is a competitive advantage.
Second, it brings cloud-native best practices to AI deployments. “It makes it easier for all of those different AI companies…to do this on Kubernetes in a reliable way, with the observability and the serving of LLMs and the GPU orchestration,” Chircop notes. Teams get battle-tested tooling for monitoring, autoscaling, and resource management—capabilities that took years to mature in the cloud-native ecosystem.
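As a small illustration of that tooling, the sketch below attaches a standard HorizontalPodAutoscaler (autoscaling/v2) to a hypothetical inference Deployment named "llm-server". The Deployment name, replica bounds, and CPU-based target are assumptions for the example; real LLM serving would more likely scale on a custom metric such as request queue depth or GPU utilization.

```python
from kubernetes import client, config

config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="llm-server-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        # Point the autoscaler at the (assumed) inference Deployment.
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="llm-server",
        ),
        min_replicas=1,
        max_replicas=8,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    # Add replicas whenever average CPU crosses 70%.
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70,
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```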
Building Bridges Through Standards
The Conformance initiative also addresses a talent challenge. As AI teams grow, they need engineers who understand both model serving and production infrastructure. By standardizing on Kubernetes, organizations can leverage existing cloud-native expertise while adding AI-specific capabilities like GPU orchestration and LLM serving frameworks.
For smaller companies building their first inference cluster, this is transformative. They inherit years of community innovation in scheduling, networking, and security without reinventing infrastructure primitives.
The Path Forward
Chircop’s perspective reflects a broader industry trend: AI and cloud-native are converging not through top-down mandates but through practical necessity. AI workloads demand the kind of sophisticated orchestration that Kubernetes provides. And Kubernetes is evolving to handle AI-specific requirements like heterogeneous GPU clusters and specialized networking patterns.
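For instance, heterogeneous GPU clusters are typically handled with scheduling primitives Kubernetes already ships: labels, node selectors, and taints. The fragment below pins an inference pod to one GPU model; the `nvidia.com/gpu.product` label follows the convention published by NVIDIA's GPU feature discovery component, and the specific product string and taint shown are assumptions for illustration, not requirements of the conformance program.

```python
from kubernetes import client

# Pod spec fragment targeting one GPU model in a mixed-GPU cluster.
pod_spec = client.V1PodSpec(
    # Node label typically published by GPU feature discovery; the exact
    # product string is an illustrative assumption.
    node_selector={"nvidia.com/gpu.product": "NVIDIA-H100-80GB-HBM3"},
    tolerations=[
        # GPU nodes are often tainted so only GPU workloads schedule there.
        client.V1Toleration(
            key="nvidia.com/gpu", operator="Exists", effect="NoSchedule"
        )
    ],
    containers=[
        client.V1Container(
            name="inference",
            image="example.registry/llm-server:latest",  # hypothetical image
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}
            ),
        )
    ],
)
```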
Organizations like Akamai are playing a crucial bridging role, bringing distributed cloud infrastructure that is both conformant with cloud-native standards and optimized for AI inference workloads. As the Conformance initiative gains adoption, expect more standardization around AI deployment patterns, making it easier for enterprises to move workloads and avoid infrastructure fragmentation.
The conversation at KubeCon suggests the two communities are no longer parallel tracks but increasingly integrated. For infrastructure leaders planning AI deployments, the message is clear: betting on Kubernetes conformance provides flexibility, portability, and access to a mature ecosystem that’s rapidly adapting to AI demands.