Egress Costs Are Killing Distributed Streaming Apps—Here’s the Fix | Prenil Kottayankandy, Akamai | TFiR

Guest: Prenil Kottayankandy
Company: Akamai
Show Name: Cloud: Evolution
Topic: Edge Computing

Real-time streaming architectures require data to move fast, be processed close to users, and scale globally. But hyperscaler economics make this expensive and architecturally painful. Egress fees penalize distributed designs. Centralized compute creates latency bottlenecks. And the price-to-performance ratio forces engineering teams into compromises that slow innovation. For latency-sensitive workloads like agentic AI, IoT automation, and real-time personalization, these tradeoffs aren’t acceptable—they’re deal breakers.

In a recent discussion, Prenil Kottayankandy, Director of Business Development at Akamai, explained why running high-performance streaming infrastructure on distributed edge platforms eliminates these traditional constraints—and how the combination of per-node efficiency and network-layer proximity creates a fundamentally different equation for real-time data workloads.

The Core Problem: Hyperscalers Weren’t Built for This

Traditional cloud platforms optimize for centralized compute and penalize data movement. For streaming workloads that need to run close to users and move data between regions or availability zones, this creates immediate friction. “How do you define those architectures, how to design it so that your costs for egress don’t go very high?” Kottayankandy asked. “People are reluctant to build many distributed applications because of those egress problems.”

The result is architectural compromise. Teams centralize workloads to avoid egress fees, accepting higher latency. They over-provision infrastructure to compensate for inefficiencies, driving up costs. And they delay or abandon distributed designs that would better serve their use cases.
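To make the economics concrete, here is a back-of-envelope sketch of how egress fees scale with fan-out in a distributed streaming design. All rates and volumes are hypothetical, chosen only for illustration; they are not Akamai, Redpanda, or hyperscaler pricing.

```python
# Illustrative model: a streaming app that replicates each day's data to
# every other region pays per-GB egress on every cross-region copy, so
# the fee grows linearly with the number of regions in the design.

def monthly_egress_cost(gb_per_day: float, regions: int, rate_per_gb: float) -> float:
    """Cost of replicating one day's stream to all other regions, over 30 days."""
    cross_region_copies = regions - 1
    return gb_per_day * cross_region_copies * rate_per_gb * 30

# 500 GB/day fanned out across 5 regions at an assumed $0.09/GB:
cost = monthly_egress_cost(500, 5, 0.09)
print(f"${cost:,.0f}/month")  # egress alone, before any compute spend
```

Under these assumed numbers, adding regions multiplies the bill, which is exactly the pressure that pushes teams back toward centralized designs.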

The Edge Solution: Efficiency at Two Layers

Kottayankandy described the solution as combining efficiency gains at two critical layers: the infrastructure layer and the application layer.

At the infrastructure layer, Akamai’s distributed edge platform delivers compute proximity. “We have over 25 years of building large distributed networks. We are the largest distributed platform for compute out there, and we drive efficiencies by being really close to users, by driving efficiencies on the network layer,” he explained. This proximity eliminates the round-trip latency inherent in centralized architectures and reduces the need for data to traverse expensive inter-region links.

At the application layer, Redpanda’s streaming platform delivers per-node efficiency. “The beauty of the Redpanda solution is in the way it drives efficiencies on a per node basis, on a per customer, on a per machine basis—you’re getting the biggest bang for your buck,” Kottayankandy noted. “Because there are those efficiency gains, and because of the power of the platform, customers are seeing better performance, better throughput, better latency, and because of those efficiencies, there’s those cost benefits.”

When combined, the result is multiplicative. “On the network layer, you’re getting the biggest performance boost and the latency boost and the throughput boost by just being really close to users. And guess what? You’re running Redpanda there, which is giving you the best performance efficiency gains on a per node, per machine, per cluster basis,” he said. “You combine both of these, and you really see a one plus one equals three situation for customers.”

Price-to-Performance That Doesn’t Exist Elsewhere

The architectural fit, Kottayankandy argues, yields a price-to-performance ratio the market can’t currently match: “You get a service which doesn’t exist in the market today, especially at the price-to-performance ratio compared to any other alternatives—be it running on hyperscalers or using alternative Kafka providers out there.”

The comparison isn’t theoretical. Akamai has tested Redpanda internally and seen measurable gains in total cost of ownership. For customers evaluating streaming platforms, the combination of edge proximity, per-node efficiency, and elimination of egress penalties creates a fundamentally different cost structure than traditional hyperscaler deployments.

Eliminating Architectural Friction

Beyond performance and cost, the partnership addresses operational friction that slows adoption. “There is the barrier to entry, or the friction on the infrastructure side. How do you procure the solution? How do you get visibility? How do you bill for it? How do you pay for it?” Kottayankandy explained. “We’re trying to remove all of these friction based on this partnership, making it super simple—using the same ways that customers are already used to procuring, billing, paying for these things through Akamai.”

This unified commercial model reduces complexity for enterprises that want to adopt real-time streaming without managing multiple vendor relationships or navigating separate procurement processes.

No Egress Penalties: Unlocking Distributed Architectures

Perhaps the most significant architectural unlock is Akamai’s approach to egress costs. “We don’t penalize people for egress costs from the edge and the compute layers,” Kottayankandy said. “We are helping people and developers build more distributed applications, real-time streaming applications, more effectively, without the compromise of cost and being worried about those adding and building costs.”

This changes the design conversation. Instead of optimizing architectures to minimize egress fees—often at the expense of performance and user experience—developers can design for proximity, responsiveness, and global distribution without cost penalties.

Why This Matters for Real-Time Workloads

For agentic AI, IoT automation, and real-time personalization, the ability to process data close to where it’s generated—without egress penalties or centralized bottlenecks—is essential. These workloads require continuous context refresh, low-latency inference, and globally consistent state. Centralized architectures introduce round-trip delays that break the user experience. Edge architectures with inefficient streaming platforms drive up costs.

The Akamai-Redpanda partnership addresses both problems simultaneously: high-performance streaming infrastructure, deployed at the edge, with no egress penalties and a price-to-performance ratio that doesn’t exist on hyperscalers.

Market Response and Customer Interest

Kottayankandy noted strong market interest. “We made a lot of noise about this in this field, and we have a lot of customers and even various enterprise C-suite teams really excited about this partnership.”

For enterprises evaluating real-time streaming platforms, the combination of edge proximity, per-node efficiency, unified procurement, and elimination of egress costs represents a clear architectural alternative to traditional hyperscaler deployments—one that’s purpose-built for the latency-sensitive, globally distributed workloads that define the next generation of enterprise applications.
