The Core Concept: Distributed cloud, real-time personalization at the edge, and operationally invisible Kubernetes platforms represent the three highest-value architectural opportunities available to enterprise AI teams in 2026.
The Guest: Danielle Cook, Senior Manager at Akamai and CNCF Ambassador
The Bottom Line:
- Distributed cloud is transitioning from an advanced option to the default application architecture for AI workloads — organizations that design for centralization will be redesigning sooner than they expect.
- Real-time personalization is already happening at the edge in retail and travel; enterprises that treat personalization as a batch or post-hoc process will lose ground to those making decisions in the moment.
- The platform layer opportunity is making Kubernetes invisible — opinionated platforms and golden paths that absorb operational complexity so developer teams can deploy AI anywhere without friction.
Speaking with TFiR, Danielle Cook of Akamai defined the current state of enterprise AI infrastructure opportunity and outlined three architectural bets that will separate leaders from laggards in 2026.
What Are the Biggest AI Infrastructure Opportunities for Enterprises in 2026?
Distributed Cloud as the Default Architecture
Cook’s first opportunity reframes distributed cloud not as an advanced architectural pattern but as the new baseline. When AI inference runs closer to users, organizations gain responsiveness and resiliency simultaneously, two outcomes that centralized architectures force teams to trade off against each other. “When you’re running AI workloads, you’re running inference closer to the users. You’re going to improve responsiveness and resiliency at the same time, and you’re going to make the experience great for your customers.” Critically, Cook notes this applies equally to external customers and internal users: any person or system that needs a quick, real-time experience benefits from this architectural shift.
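To make the responsiveness-and-resiliency claim concrete, here is a minimal Python sketch of latency-aware routing across distributed inference regions, with failover when the nearest region is unreachable. The regional endpoint URLs and the /infer path are hypothetical placeholders, not Akamai’s actual routing layer.

```python
# Minimal sketch: route each inference request to the lowest-latency
# region, and fail over down the ranked list for resiliency.
# All endpoints and the /infer path are hypothetical placeholders.
import requests

# Hypothetical inference endpoints deployed close to user populations.
REGIONS = {
    "us-east": "https://us-east.inference.example.com",
    "eu-west": "https://eu-west.inference.example.com",
    "ap-south": "https://ap-south.inference.example.com",
}

def rank_regions_by_latency() -> list[str]:
    """Order regions by measured round-trip time; unreachable regions sort last."""
    timings = {}
    for name, base in REGIONS.items():
        try:
            timings[name] = requests.head(base, timeout=0.5).elapsed.total_seconds()
        except requests.RequestException:
            timings[name] = float("inf")  # Treat unreachable regions as worst case.
    return sorted(timings, key=timings.get)

def infer(payload: dict) -> dict:
    """Send the request to the nearest region; fall back to the next-closest on failure."""
    for name in rank_regions_by_latency():
        try:
            resp = requests.post(f"{REGIONS[name]}/infer", json=payload, timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            continue  # Resiliency: the next-closest region absorbs the failure.
    raise RuntimeError("No inference region reachable")
```

Routing to the lowest-latency region covers responsiveness; walking down the ranked list when a request fails covers resiliency. A single centralized endpoint cannot deliver both at once.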
Real-Time Personalization at the Edge
The second opportunity is already visible in the market. Cook points to retail and travel as sectors where AI-driven decisions are happening in the moment, not aggregated and acted on after the fact. Distributed cloud is the infrastructure layer that makes this possible. Whether an organization is selling an AI application as a vendor or deploying AI capabilities in-house, the distributed cloud consideration is the same: the architecture must support decisions at the moment of interaction, not downstream. “Decisions are happening in the moment, not after the fact.”
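As a rough sketch of what deciding “in the moment” means mechanically, the Python below picks an offer during the request itself, from features already replicated to the edge, rather than logging the interaction for a later batch job. Every name in it (EDGE_FEATURES, score_offer, the toy scoring rule) is hypothetical.

```python
# Minimal sketch of in-the-moment personalization at an edge node:
# the decision is made during the request, from a locally cached
# feature store, not batched and applied after the fact.
import time

# Hypothetical per-user features replicated to the edge ahead of time.
EDGE_FEATURES = {
    "user-123": {"recent_category": "hotels", "price_sensitivity": 0.8},
}

OFFERS = [
    {"id": "late-checkout", "category": "hotels", "discount": 0.05},
    {"id": "flight-bundle", "category": "flights", "discount": 0.15},
]

def score_offer(features: dict, offer: dict) -> float:
    """Toy scoring: match the user's recent interest, weighted by price sensitivity."""
    match = 1.0 if offer["category"] == features["recent_category"] else 0.0
    return match + features["price_sensitivity"] * offer["discount"]

def handle_request(user_id: str) -> dict:
    """Decide at request time which offer to show; no post-hoc batch step."""
    start = time.perf_counter()
    features = EDGE_FEATURES.get(user_id, {"recent_category": "", "price_sensitivity": 0.5})
    best = max(OFFERS, key=lambda o: score_offer(features, o))
    return {"offer": best["id"], "decided_in_ms": (time.perf_counter() - start) * 1000}

print(handle_request("user-123"))  # e.g. {'offer': 'late-checkout', 'decided_in_ms': ...}
```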
Making Kubernetes Effectively Invisible
The third opportunity sits at the platform layer and has significant implications for engineering productivity and AI deployment velocity. Cook argues that the real unlock is not Kubernetes itself — it is the opinionated platform built on top of it that removes operational burden entirely. Through internal developer platforms (IDPs) and golden paths shaped by the cloud native community’s maturation, teams stop managing infrastructure and start deploying AI anywhere it needs to run. “Your teams can just be deploying AI anywhere.”
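What a golden path can look like from the developer’s side, as a minimal sketch: one opinionated entry point that generates the Kubernetes Deployment underneath. The platform_deploy function and its defaults are hypothetical, not a real IDP interface or an Akamai product API.

```python
# Minimal sketch of a "golden path" abstraction: the developer-facing
# call hides the Kubernetes objects the platform generates underneath.
# platform_deploy and its defaults are hypothetical illustrations.

def platform_deploy(service: str, image: str, regions: list[str]) -> dict:
    """One opinionated entry point; everything below is the platform's problem."""
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service, "labels": {"managed-by": "platform"}},
        "spec": {
            "replicas": 2,  # Opinionated default the team never has to choose.
            "selector": {"matchLabels": {"app": service}},
            "template": {
                "metadata": {"labels": {"app": service}},
                "spec": {"containers": [{"name": service, "image": image}]},
            },
        },
    }
    # A real platform would also wire up Service, Ingress, autoscaling,
    # and per-region rollout here; the developer sees none of it.
    return {"manifest": manifest, "regions": regions}

# Developer experience: one line, no Kubernetes vocabulary required.
result = platform_deploy("recommender", "registry.example.com/recommender:1.4",
                         ["us-east", "eu-west"])
print(f"Deploying to {len(result['regions'])} regions")
```

The design point is surface area: the developer supplies a service name, an image, and target regions, and never touches a manifest.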
Broader Context: How These Opportunities Connect to Akamai’s 2026 Strategy
These three opportunities are the direct counterpart to the challenges Cook outlined in her full TFiR interview — centralized architecture failure, day-two operational complexity, and the collapse of request-response models under agentic traffic. Each opportunity is a structural response to a structural problem. Distributed cloud answers the latency and resiliency failures of centralization. Real-time personalization is only achievable once inference is placed close enough to users to execute in the moment. And invisible Kubernetes addresses the operational complexity that currently slows AI deployment across teams.
Akamai’s own position in this landscape is deliberate. Its managed Kubernetes engine (LKE), GPU-backed infrastructure, and distributed edge are designed to deliver precisely this combination: any AI model, running wherever users are, with the developer experience abstracted away from the underlying infrastructure complexity. “We want to make AI inference workloads great. We want our customers’ experiences to be amazing.”
Watch the full TFiR interview with Danielle Cook here.