Kubernetes at 82% Adoption: Why Maturity, Not Technology, Now Defines Cloud Native Success

Guest: Hilary Carter
Company: The Linux Foundation
Show Name: The Source
Topic: Cloud Native

Kubernetes has crossed a threshold. It’s no longer a differentiator—it’s invisible infrastructure, as foundational as the Linux kernel beneath it. The 2025 CNCF Annual Cloud Native Survey reveals 82% of organizations now run Kubernetes in production, up from 66% just two years ago. But the more striking finding is how Kubernetes has become the backbone of AI infrastructure, with 66% of organizations running generative AI workloads relying on it for inference. Hilary Carter, Senior Vice President of Research at the Linux Foundation, explains why this shift from differentiator to default changes everything about how we think about cloud-native success.

The Invisible Infrastructure Moment

When Kubernetes becomes invisible, that’s a sign of maturity—not irrelevance. Carter draws a direct comparison to the Linux kernel: “That’s a very interesting parallel. It shows a lot of similar properties to the Linux kernel and is somewhat invisible. I think that is true for mature, foundational infrastructure, and that’s really where we are with the Kubernetes project.”

This invisibility represents success and ubiquity. The jump from 66% to 82% production adoption in just two years is a major gain in Kubernetes proliferation. “As more and more organizations modernize their core infrastructure and recognize the value of transitioning to cloud-native practices, processes, and infrastructure, it’s really just a sign of the times,” Carter notes.

The verdict is clear: Kubernetes is now “the de facto industry standard for cloud-native orchestration and production and deployment.” That’s a remarkable evolution for a project that, not long ago, was considered complex and difficult to adopt.

Why AI Workloads Run on Kubernetes

Perhaps the most telling finding from the survey is the intersection of Kubernetes and AI infrastructure. The data shows 66% of organizations using generative AI rely on Kubernetes for some or all of their AI workloads. But these organizations aren’t building models from scratch—they’re running inference workloads.

“The primary activity that they’re doing is running inference workloads. They’re not building models. They’re running inference workloads themselves, and doing so on Kubernetes to a significant extent,” Carter explains. This distinction matters. Building foundation models is expensive, resource-intensive, and time-consuming. Running inference workloads on existing models is more practical and cost-effective—and Kubernetes excels at exactly this type of workload.

Why is Kubernetes so well-suited for AI infrastructure? Carter points to its core strengths: “Kubernetes has always been exceptionally good at orchestration, and it’s been exceptionally good at resource-intensive workloads—running those workloads at scale and in decentralized environments.”

The project has found what Carter calls “a phenomenal intersection with AI use cases.” It became “the right foundational project at the right time in the innovation landscape,” enabling GPU scheduling, auto-scaling, workload isolation, and seamless integration with other processes. These capabilities make Kubernetes “understandably, an incredibly useful project right now” for organizations scaling AI workloads.
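To make the GPU-scheduling capability concrete, here is a minimal sketch of a Kubernetes Pod manifest for an inference workload, built as a plain Python dict (the same structure you would express in YAML and pass to `kubectl apply`). The image name, Pod name, and GPU count are illustrative assumptions, not anything from the survey; the `nvidia.com/gpu` extended resource is the standard mechanism (via the NVIDIA device plugin) by which the scheduler places Pods onto GPU nodes.

```python
# Sketch: a Pod manifest requesting a GPU for an inference workload.
# Built as a plain Python dict, equivalent to the YAML you would apply
# with kubectl. Image name and resource counts are illustrative.

def gpu_inference_pod(name: str, image: str, gpus: int = 1) -> dict:
    """Return a Pod manifest that asks the scheduler for `gpus` NVIDIA GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": "inference",
                    "image": image,
                    "resources": {
                        # Extended resources like nvidia.com/gpu go under
                        # limits; the scheduler will only place this Pod on
                        # a node advertising enough free GPUs.
                        "limits": {"nvidia.com/gpu": str(gpus)},
                    },
                }
            ],
            "restartPolicy": "Never",
        },
    }

pod = gpu_inference_pod("llm-inference", "example.com/llm-server:latest")
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

Auto-scaling and workload isolation build on the same primitives: because the GPU request is declared in the spec, a cluster autoscaler can provision GPU nodes on demand, and namespaces and resource quotas keep inference workloads from starving neighbors.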

The End of the Early Adopter Era

With 98% of organizations having adopted cloud-native techniques, the early adopter era is definitively over. Cloud native is no longer a competitive differentiator—it’s table stakes. But that doesn’t mean every organization is seeing the same value from their cloud-native investments.

“I think it’s much more than just adoption. It really is about maturity, and it’s about the culture of that project within an organization that really sets it up for success,” Carter emphasizes. This shift from adoption to maturity represents a fundamental change in how the industry thinks about cloud-native success.

The Four Stages of Cloud-Native Maturity

The CNCF survey revealed four clear categories that define maturity within the cloud-native landscape: explorers, adopters, practitioners, and innovators. These stages aren’t arbitrary—they’re defined by specific practices and cultural characteristics.

“What’s fascinating is the extent to which that progression through the maturity stages is marked by clear practices like GitOps, continuous integration, continuous delivery, automation, platform engineering,” Carter explains. Moving from explorer to innovator requires more than implementing tools. It requires cultural transformation.

“It’s more than tooling, but importantly, it’s also about the culture of transformation and how we manifest successful technology adoptions through these other practices, like creating cultures that support CI/CD, that support GitOps practices and their adoption and support cross-project collaboration,” Carter says. “That’s really what defines success.”

How, Not What

The key insight from the maturity data is straightforward but profound: “Cloud-native techniques are prolific with this 98% figure, but it’s about so much more than adoption. Really, it is about the how, not just the what, that defines success.”

This reframes the entire cloud-native conversation. Organizations can no longer differentiate themselves simply by adopting Kubernetes or implementing containers. Every competitor has done the same. The differentiator now is how effectively you implement these technologies—the practices, processes, and cultural factors that determine whether cloud-native adoption delivers real business value.

Organizations at the explorer stage might have adopted Kubernetes, but they lack the cultural practices that make it effective. Adopters have moved further, implementing some best practices but not yet achieving full integration. Practitioners have mature processes in place, while innovators have fully integrated cloud-native practices into their organizational culture and are pushing the boundaries of what’s possible.
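One way to make the four stages concrete is to think of them as a function of which key practices—GitOps, CI/CD, automation, platform engineering, cross-project collaboration—an organization has actually embedded. The sketch below does exactly that; the practice list comes from the article, but the counting thresholds are invented for illustration and are not CNCF’s scoring methodology.

```python
# Illustrative sketch: map adopted cloud-native practices onto the four
# CNCF survey stages. The practice names come from the article; the
# thresholds are invented assumptions, not CNCF's actual methodology.

PRACTICES = {
    "gitops",
    "ci_cd",
    "automation",
    "platform_engineering",
    "cross_project_collaboration",
}

def maturity_stage(adopted: set) -> str:
    """Classify an organization by how many key practices it has in place."""
    count = len(adopted & PRACTICES)
    if count == 0:
        return "explorer"       # running Kubernetes, few supporting practices
    if count <= 2:
        return "adopter"        # some best practices, not yet integrated
    if count <= 4:
        return "practitioner"   # mature processes in place
    return "innovator"          # practices fully embedded in the culture

print(maturity_stage({"gitops", "ci_cd"}))  # → adopter
```

The point of the sketch is the shape of the model, not the numbers: progression is driven by accumulating practices and the culture that sustains them, not by adopting any single tool.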

What This Means for Infrastructure Leaders

For organizations navigating cloud-native adoption, the message is clear: focus on maturity, not just technology. Having Kubernetes in production is necessary but insufficient. The real work is building the cultural practices—GitOps, CI/CD, automation, platform engineering, cross-project collaboration—that separate practitioners and innovators from explorers and adopters.

This also means rethinking how you evaluate cloud-native success. The relevant questions aren’t “Have we adopted Kubernetes?” or “Are we running containers?” The questions are: How mature are our practices? Do we have a culture that supports continuous integration and delivery? Are we enabling cross-project collaboration? Have we moved beyond tooling to transformation?

For organizations running or planning to run AI workloads, the data validates Kubernetes as the foundation for scalable AI infrastructure. With 66% of peers already running inference workloads on Kubernetes, it’s become the proven path for organizations that need to scale AI capabilities without the cost and complexity of building models from scratch.

The invisible infrastructure moment has arrived. Kubernetes is no longer the story—how you use it is.
