Cloud Native

The Cloud Native Maturity Model didn’t mention AI at all in 2021. Now it’s woven throughout.

Guest: Danielle Cook
Company: Akamai
Show Name: KubeStruck
Topic: Observability

When the Cloud Native Computing Foundation (CNCF) released its maturity model in 2021, artificial intelligence was nowhere in the framework. Fast forward to 2024, and AI is woven throughout every level of cloud native adoption guidance. This shift reflects a fundamental question facing enterprise technology teams: how do you integrate AI tools safely and effectively across cloud native environments without creating new risks or inefficiencies?

Danielle Cook, Senior Product Marketing Manager at Akamai and CNCF Ambassador, offers an insider perspective on how the cloud native community is navigating this transition. As co-organizer of the CNCF Cartografos Working Group that maintains the Cloud Native Maturity Model, Cook has watched the framework evolve from AI-agnostic guidance to a roadmap that helps organizations determine where AI fits, at what maturity stage, and whether to deploy it across entire teams or start with targeted use cases.

The evolution of the Cloud Native Maturity Model tells a larger story about how open source communities respond to disruptive technology shifts. When Cook and her colleagues first assembled the framework in 2021, AI was emerging but not yet central to cloud native operations. The model focused on helping organizations assess their maturity across dimensions like observability, security, and platform engineering without considering AI-powered tools.

The latest version takes a fundamentally different approach. Rather than treating AI as a separate concern, the updated model integrates AI considerations across existing maturity dimensions. Organizations now receive guidance on where AI tools are appropriate, what maturity level they need to reach before adopting AI capabilities, and whether to implement AI broadly or in specific areas first.

This measured approach reflects broader community conversations happening within the CNCF. Cook points to the AI working group, which focuses specifically on safe adoption strategies. The group’s mandate centers on making teams more efficient and productive through AI while maintaining security and operational standards. This contrasts sharply with vendor announcements that often emphasize AI capabilities without addressing the implementation complexities or risks.

Cook’s involvement extends beyond the maturity model. As co-founder of KubeCrash, a virtual conference now in its eighth edition, she has direct visibility into what cloud native practitioners actually want to discuss. The conference started as a two-hour event for people who couldn’t attend KubeCon in person but wanted access to quality content. It has since grown into a day-long conference that recruits speakers based on community demand.

For the past three KubeCrash events, one topic has dominated attendee requests: platform engineering. The intersection of platform engineering and AI has become particularly urgent as organizations realize that AI workloads require sophisticated orchestration, observability, and resource management. Platform engineering provides the foundation for deploying AI tools safely and efficiently across cloud native infrastructure.

This trend aligns with Akamai’s strategic direction. The company recently announced its Akamai Inference Cloud, which combines cloud infrastructure, security tools, and edge capabilities to run AI inference workloads close to users. Cook emphasizes that AI workloads must run on cloud native infrastructure, making the maturity model’s guidance on AI adoption particularly relevant for enterprises evaluating where to deploy machine learning models.

The gap between vendor messaging and community conversations reveals an important dynamic. While companies announce AI features and capabilities, practitioners focus on practical challenges around adoption maturity, safety protocols, and organizational readiness. The CNCF’s approach through its maturity model and working groups addresses this gap by providing frameworks that help organizations move beyond the hype to make informed decisions about AI integration.

For technology decision-makers, the Cloud Native Maturity Model’s evolution signals a shift from whether to adopt AI to how to adopt it responsibly. The framework acknowledges that not every organization is ready for AI at every maturity level, and not every use case benefits from AI augmentation. This nuanced perspective helps enterprises avoid the pitfall of adopting AI tools simply because they are available, instead focusing on where AI delivers measurable value without introducing unacceptable risk.
