Guests: Prenil Kottayankandy | Zeke Dean
Companies: Akamai | Redpanda
Show Name: Cloud: Evolution
Topics: Edge Computing, Agentic AI
Real-time AI applications promise millisecond-level decision-making, but most enterprise architectures can't deliver it. The problem isn't compute power or model quality; it's the data layer. When agentic workloads run at the edge while data sits in centralized cloud regions, every round trip adds latency that kills performance. For AI agents that must continuously refresh context and execute multi-step inference calls, that lag is unacceptable.
In a recent discussion, Prenil Kottayankandy, Director of Business Development at Akamai, and Zeke Dean, Senior Partner Solutions Engineer at Redpanda, broke down how their partnership addresses this architectural gap by co-locating real-time streaming infrastructure with distributed edge compute.
The Data Proximity Problem
For developers building AI agents, IoT automation, or personalization engines, the traditional approach of running data streams in centralized regions creates a fundamental mismatch. “If you’re a developer and an existing Akamai customer trying to build a real-time streaming application, an AI application, or an agentic application, you need access to data really quickly. You need access to data in milliseconds,” Kottayankandy explained.
The solution is straightforward but technically demanding: put the data where the compute lives. With Redpanda’s streaming platform running on Akamai’s distributed cloud infrastructure, developers can now access high-performance real-time data streams directly where they’re building applications—eliminating the round-trip penalty that plagues centralized architectures.
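The round-trip penalty compounds quickly for agents that fetch data repeatedly. The arithmetic below is a back-of-the-envelope sketch; the round-trip times and step counts are illustrative assumptions, not measured figures from Akamai or Redpanda.

```python
def total_data_latency_ms(steps: int, fetches_per_step: int, rtt_ms: float) -> float:
    """Total time an agent spends waiting on data round trips."""
    return steps * fetches_per_step * rtt_ms

# Assumed round-trip times: ~80 ms to a distant centralized region,
# ~5 ms to a co-located edge data stream.
centralized = total_data_latency_ms(steps=10, fetches_per_step=3, rtt_ms=80.0)
edge = total_data_latency_ms(steps=10, fetches_per_step=3, rtt_ms=5.0)

print(f"centralized: {centralized:.0f} ms")  # 2400 ms of waiting
print(f"edge:        {edge:.0f} ms")         # 150 ms of waiting
```

Even under these rough assumptions, a ten-step agent making three data calls per step spends over two seconds waiting on a centralized region versus a fraction of a second against a co-located stream.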
Performance Gains and TCO Impact
The partnership isn’t theoretical. Akamai has already deployed Redpanda for its own internal products, seeing a 55% improvement in total cost of ownership. “We’ve seen significant gains. We’ve seen about a 55% improvement in TCO just by leveraging and adopting this new technology,” Kottayankandy noted. “Some of our existing products already run on this technology, and we believe the benefits will compound as we have more customers, more data, and larger applications being built on this platform.”
Dean emphasized that Redpanda’s performance characteristics are critical for edge deployments. “We are the fastest streaming technology out there. And not only are we the fastest, but we also deliver consistent low latency. So from P99 to infinity, you’ll never experience interruptions in latency.”
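P99 here refers to the 99th-percentile latency: the value that 99% of requests come in under. "Consistent low latency" means the gap between the median and the tail stays small. A minimal nearest-rank percentile helper, with made-up sample latencies, shows how that gap is measured:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest value >= p% of the samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative latency samples (ms), including one tail outlier.
latencies_ms = [1.2, 1.3, 1.1, 1.4, 1.2, 9.8, 1.3, 1.2, 1.1, 1.5]
p50 = percentile(latencies_ms, 50)  # median: 1.2 ms
p99 = percentile(latencies_ms, 99)  # tail:   9.8 ms
```

A single slow outlier barely moves the median but dominates P99, which is why tail percentiles, not averages, are the benchmark for streaming platforms.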
Architectural Fit: Two Halves of the Same Problem
Dean described the technical synergy clearly: “Redpanda and Akamai are solving two halves of the same problem—how to run intelligent, real-time applications globally with predictable low latency and high performance computing.”
Akamai’s cloud routes requests to the edge closest to CPUs and GPUs, providing low-latency performance and secure execution environments for agentic workloads. Redpanda complements this by delivering the fastest available streaming technology with consistent, predictable latency. When combined, the result is a platform where real-time streaming workloads run closer to where data is generated—at the edge—rather than requiring round trips to centralized regions.
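The core routing idea, stripped to its essence, is to send each request to the site with the lowest measured round-trip time. The sketch below is a toy version under that assumption; site names and RTTs are hypothetical, and Akamai's actual request routing is far more sophisticated.

```python
def nearest_site(rtts_ms: dict[str, float]) -> str:
    """Return the edge site with the smallest measured round-trip time."""
    return min(rtts_ms, key=rtts_ms.get)

# Hypothetical probe results from a client to three candidate sites.
probes = {"fra": 12.0, "ams": 9.5, "lon": 21.0}
print(nearest_site(probes))  # ams
```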
Use Cases Unlocked
The partnership enables three primary use cases that were previously difficult to execute at scale:
First, agentic AI at the edge. Multi-step agents require repeated low-latency inference calls and continuous context refreshes. With LLMs, GPUs, and real-time data streams all on the same node, developers can build responsive AI systems without latency penalties.
Second, global IoT ingestion and real-time automation. Millions of devices producing event streams need immediate filtering, enrichment, and action. Deploying streaming infrastructure at the edge ensures data is processed where it’s generated.
Third, personalization and customer experience. Serving real-time recommendations and conversational interfaces globally requires eliminating the lag inherent in cross-region data access.
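The IoT case above hinges on filtering and enriching events where they are produced rather than shipping everything to a central region. The sketch below shows that step as plain Python; in a real deployment this logic would consume from and produce to Redpanda topics via its Kafka-compatible API, and the event schema and threshold here are illustrative assumptions.

```python
def process(events, threshold=70.0):
    """Drop readings below the threshold; tag the rest for downstream action."""
    for event in events:
        if event["temp_c"] >= threshold:
            # Enrich the surviving event with routing metadata.
            yield {**event, "alert": True, "site": "edge-demo"}

readings = [
    {"device": "d1", "temp_c": 21.5},  # filtered out at the edge
    {"device": "d2", "temp_c": 88.0},  # enriched and forwarded
]
alerts = list(process(readings))
```

Because the filter runs at the edge, only the one actionable event crosses the network, which is the bandwidth and latency win the use case describes.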
“The headline is simple,” Dean said. “Akamai provides distributed inference and a secure envelope, while Redpanda provides real-time streaming and agentic data governance on their platform. Together, we can run globally distributed, real-time AI systems that are faster and easier to operate at scale.”
Commercial Model and Customer Access
From a go-to-market perspective, the partnership is designed for simplicity. Customers can purchase the solution through Akamai’s cloud platform (Linode) with unified channels and a clear support model. Akamai handles the commercial side, while Redpanda supports the software—reducing complexity for enterprises looking to adopt real-time streaming without managing multiple vendor relationships.
For Redpanda customers, the partnership unlocks access to the largest distributed network in the world, purpose-built for performance, latency, scale, and cost efficiency. For Akamai customers, it provides one of the best-performing real-time streaming technologies available, directly integrated into the platform where they’re already building applications.
Why This Matters Now
As AI workloads become more distributed and agentic applications demand continuous context awareness, the data layer can no longer be an afterthought. The shift from batch processing to real-time streaming, combined with the move from centralized cloud to distributed edge infrastructure, requires a rethink of how data and compute are architected together.
The Akamai-Redpanda partnership represents a clear answer: co-locate high-performance streaming with edge compute, eliminate latency bottlenecks, and give developers the tools to build AI systems that actually perform in real time, at global scale.