Every enterprise is racing to deploy AI agents. The bottleneck isn’t the model — it’s the data layer. Agents that can’t access fresh, real-time data from multiple sources at low latency will not perform in production.
The Guests:
- Prenil Kottayankandy, Director of Business Development at Akamai
- Zeke Dean, Senior Partner Solutions Engineer at Redpanda
The Bottom Line:
- The Akamai x Redpanda partnership delivers a unified infrastructure stack — edge inferencing plus a real-time streaming data plane — that enterprises need to deploy production-grade agentic applications
- Redpanda acts as the connector layer across 50+ data sources, eliminating the complexity agents would otherwise have to resolve themselves
- Akamai’s edge compute runs inferencing workloads closer to users, while Redpanda places streaming data right where compute lives — eliminating the latency gap that makes most agentic architectures impractical at scale
Speaking with TFiR, Prenil Kottayankandy of Akamai described the current state of real-time streaming infrastructure for enterprises building agentic AI applications — and why the timing of the Akamai and Redpanda partnership is particularly significant.
WHAT DOES REAL-TIME STREAMING UNLOCK FOR AGENTIC AI?
The core business opportunity Prenil describes is architectural. Enterprises that previously dismissed streaming data as a niche infrastructure concern are now confronting it as a hard requirement for any AI agent deployment. Agentic applications by definition require agents to query multiple data sources dynamically and return results with low latency. Without a purpose-built streaming layer, that coordination creates compounding complexity that teams must solve manually — or not at all.
“The timing is perfect because now there is an explosion of applications that need access to real-time data and streaming data. Everyone is talking about building applications that use AI, or building agents that talk to each other, to humans, to different data layers, and provide that information back to users.”
Prenil frames the partnership as unlocking infrastructure that was not previously available in the market. Redpanda’s data plane connects to 50+ backend sources and presents a unified queryable layer to any agent running on the platform — abstracting away the source complexity entirely.
“This partnership really unlocks the underlying layers needed for those kinds of applications. Think about any large enterprise customer currently thinking, ‘I need to build an agentic application.’ That agent needs to talk to various different data sources, which introduces significant complexity. They also need to figure out where to run those agents so they respond to users in a high-performance way.”
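The abstraction Prenil describes can be sketched in a few lines. The sketch below is purely illustrative: the source names and the `UnifiedLayer` class are hypothetical stand-ins, not Redpanda APIs. The point it makes is architectural — routing and source fan-out move out of the agent and into a single layer the agent queries.

```python
# Toy illustration of what a unified data layer abstracts away.
# Without it, the agent itself must know about and call every backend;
# with it, the agent addresses one endpoint and routing happens below.
# All names here are hypothetical, for illustration only.

def fetch_orders(customer_id):
    # Stand-in for one backend source (e.g., an orders database).
    return [{"order": "A-100", "customer": customer_id}]

def fetch_inventory(sku):
    # Stand-in for a second, unrelated backend source.
    return {"sku": sku, "in_stock": 3}

class UnifiedLayer:
    """Toy stand-in for a streaming data plane that hides source fan-out."""

    def __init__(self, sources):
        self.sources = sources  # source name -> callable

    def query(self, source, *args):
        # The agent issues one call; which backend answers is resolved here,
        # not inside the agent's own logic.
        return self.sources[source](*args)

layer = UnifiedLayer({"orders": fetch_orders, "inventory": fetch_inventory})
print(layer.query("orders", "cust-42"))
print(layer.query("inventory", "SKU-7"))
```

Scaling this pattern from two stub functions to 50+ real sources is exactly the complexity the agent no longer has to carry.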
EDGE INFERENCING + STREAMING: WHY CO-LOCATION MATTERS
The second dimension of this clip’s argument is physical proximity. Akamai’s inference cloud is designed to run AI inferencing at the edge — closer to users, not in a centralized hyperscaler region. Redpanda is then deployed at the same edge location, so data is already co-located with compute. The result is that the data retrieval step — often the dominant source of latency in agentic pipelines — is compressed dramatically.
“Now you have the ability to build out inferencing workloads or latency-sensitive workloads right at the edge. With the data plane that Zeke spoke about, you have the data sitting right where the compute is, and Redpanda takes on the task of connecting to 50 different sources behind the scenes and providing a unified layer you can query in real time.”
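The latency argument can be made concrete with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measured figures from Akamai or Redpanda; what matters is the structural difference between paying a round trip per source and paying roughly one round trip to a co-located, unified layer.

```python
# Illustrative latency budget only -- all numbers are assumptions.
REMOTE_RTT_MS = 60   # assumed round trip to a centralized cloud region
LOCAL_RTT_MS = 2     # assumed round trip to data co-located at the edge
SOURCES = 5          # sources an agent consults per user request

def retrieval_latency(rtt_ms, sources, sequential=True):
    # Sequential fan-out pays the round trip once per source; a unified,
    # co-located layer pays it roughly once regardless of source count.
    return rtt_ms * sources if sequential else rtt_ms

remote = retrieval_latency(REMOTE_RTT_MS, SOURCES)            # 300 ms
local = retrieval_latency(LOCAL_RTT_MS, SOURCES, sequential=False)  # 2 ms
print(f"remote fan-out: {remote} ms, co-located unified query: {local} ms")
```

Under these assumed numbers the retrieval step shrinks by two orders of magnitude — which is why data retrieval, not model inference, is often the dominant term in an agentic pipeline's latency budget.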
BROADER CONTEXT: WHAT THE FULL INTERVIEW REVEALS
This clip is drawn from a broader TFiR discussion in which Prenil and Zeke Dean unpack the full scope of the Akamai Qualified Compute Partner Program and the strategic rationale behind Redpanda joining it. The QCP is a highly curated program of approximately 30 partners serving 6,000+ enterprise customers and over 150,000 SMB customers on the Akamai platform.
Zeke described the architectural fit explicitly: Akamai brings globally distributed compute and edge reach, while Redpanda brings a real-time streaming backbone fully compatible with the Apache Kafka API — re-architected to be simpler to operate, faster under load, and more reliable at scale.
The economics are equally significant. Akamai does not charge punitive egress fees at its edge and compute layers — a structural advantage that removes a major barrier to building distributed, data-intensive applications. Combined with Redpanda’s per-node efficiency gains, the resulting price-to-performance ratio represents a genuinely new option in the market versus hyperscalers or alternative Kafka providers.
Watch the full TFiR interview with Prenil Kottayankandy and Zeke Dean here