Inside Akamai’s Expanding Role in Cloud Native, Observability and AI Workloads | Danielle Cook

Guest: Danielle Cook (LinkedIn)
Company: Akamai
Show Name: An Eye on AI
Topic: Observability

Every KubeCon brings its own dominant theme. This year, the noise was unmistakable: AI is reshaping everything from observability pipelines to developer experience to the future of cloud native infrastructure itself. But with so much hype and so many tools, teams are struggling to separate what matters from what doesn’t.

In this conversation, Danielle Cook, Senior Product Marketing Manager at Akamai and CNCF Ambassador, breaks down the real shifts happening inside the ecosystem — from the evolution of observability to the rising urgency around AI readiness, platform engineering, and edge inference.

KubeCon has always been a showcase for emerging ideas, but 2024 and 2025 have pushed the community into a new phase. Observability, developer productivity, platform engineering, and AI are no longer separate domains. They are now parts of a single problem: how do teams stay in control as systems become more distributed, more automated, and far more complex?

For Cook, this year’s conference put that challenge on full display. She moderated a panel focused on “beyond the dashboard” observability — a conversation shaped not by vendors but by real practitioners who are dealing with signal overload, fragmented tooling, and unclear priorities. As she explained, it was important to move past the endless “observability” branding on booth banners and ask a simpler question: What should teams actually care about?

When every environment produces tens of thousands of metrics, logs, and traces, noise becomes the enemy. Cook noted that the panel deliberately brought together end users — engineers from Intuit and other organizations who work hands-on with production systems. Their goal was to share how they identify what matters, how they tune signal-to-noise, and how observability can connect directly to business outcomes.

A second theme behind the session was representation. The panel was composed entirely of women working in observability — an intentional choice, as Cook shared. The cloud native community is diverse, but visibility often does not reflect that diversity. By putting these practitioners on stage, the panel showcased the strong technical leadership already present across CNCF projects and end-user organizations.

Beyond observability, Cook highlighted a broader cultural strength of CNCF: inclusion driven by people, not just technology. She pointed to initiatives like the Deaf and Hard of Hearing working group, community-run programs, and the welcoming environment created by organizers and contributors. In her view, this is what keeps the ecosystem healthy, even as the technical landscape becomes more complex.

AI inevitably came up — not just as a marketing topic, but as a real operational challenge. Cook observed that every conversation at the event touched on AI in some way, but teams are now asking deeper questions:

Do we need AI to help us observe? Are we observing AI workloads the same way we observe everything else? Or are we now running AI as a workload inside Kubernetes?

The answer, as she explained, is all of the above. AI adds yet another layer of complexity. Kubernetes environments were already hard to manage, and the introduction of AI pipelines, GPU scheduling, inference workloads, and automation logic makes them even more complicated. As she put it, anyone using Kubernetes has already accepted complexity; AI simply raises the stakes.
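To make the "AI as a workload inside Kubernetes" point concrete: an inference service is scheduled like any other Deployment, but it must declare its GPU needs explicitly so the scheduler can place it on suitable nodes. The sketch below is illustrative only — the names, image, and replica count are hypothetical, and the `nvidia.com/gpu` resource key assumes the NVIDIA device plugin is installed on the cluster; none of this comes from the episode itself.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      containers:
      - name: model
        image: registry.example.com/llm-inference:latest   # placeholder image
        resources:
          limits:
            nvidia.com/gpu: 1     # requires the NVIDIA device plugin on the node
```

Even this minimal fragment hints at the added operational surface Cook describes: GPU capacity planning, node pools with accelerators, and observability for a workload whose cost and latency profile differs sharply from a typical stateless service.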

One major area of work she participates in is the Cloud Native Maturity Model, which she helped write and maintain through the Cartografos working group. The first version did not include AI at all. Today, the model has been updated to incorporate AI adoption across different maturity levels, reflecting how central AI has become to cloud native practices. Teams now need guidance on where AI fits, when to adopt it, and how to use it safely.

The CNCF community’s focus on safe, practical AI adoption includes the new AI working group, which is exploring how organizations can integrate AI without creating unnecessary risk. According to Cook, this is not about chasing headlines — it is about making people more productive without compromising reliability or security.

Her work extends beyond CNCF roles. Cook is also one of the organizers of KubeCrash, a community-run virtual conference designed for practitioners who cannot attend KubeCon in person. The event started as a small two-hour program but has grown into a full-day conference with strong speakers and active community input.

The trend she sees in KubeCrash sessions is consistent: platform engineering dominates the agenda, followed closely by AI. These are the real pain points for teams today.

With so many companies announcing AI and SRE-focused tooling at the show, Cook shared a grounded perspective: there is a difference between what is being promoted and what is actually being implemented in production. Industry surveys conflict — some say AI is transforming operations, while others show slow adoption. The truth, she said, is that we are still early. Success will depend on adaptability and willingness to evolve.

The conversation also turned to Akamai itself. While Akamai is widely recognized for its legacy as the company that pioneered the CDN and a major force in security and streaming, its presence at KubeCon has grown significantly in recent years. Cook works within the cloud technologies group, where she sees firsthand how deep the engineering expertise runs.

Akamai’s investments now extend into managed Kubernetes (LKE), developer-focused platform tooling, and its App Platform project, which helps teams stitch together CNCF components to build internal developer platforms. These tools are designed to help developers navigate complexity rather than add to it.

A major milestone Cook highlighted is the launch of the Akamai Inference Cloud. This offering brings together Akamai’s infrastructure, its long-established security capabilities, and its global edge footprint to enable businesses to run AI inference closer to users. With AI shifting from model training to real-time consumption, edge inference is becoming increasingly important for performance, cost, and user experience.

For Cook, Akamai’s trajectory is exciting because the company is positioned to support the “AI future” in a comprehensive way. She sees the next phase of AI not as training giant models, but as delivering fast, reliable, secure experiences to end users — and Akamai is building the infrastructure to make that possible at global scale.

As the conversation wrapped, Cook shared her excitement for the innovations ahead and her continued commitment to community-driven growth within CNCF. Whether through panels, working groups, KubeCrash events, or Akamai’s growing cloud presence, she believes the ecosystem will only get stronger as teams learn how to integrate AI safely and meaningfully into cloud native environments.
