
The AI Network Wall: Why You’re Wasting Billions on Idle GPUs | Drew Perkins & Omar Hassen, Eridu | TFiR


AI data centers are burning through billions of dollars in GPU investments, but a hidden bottleneck is severely throttling their true potential. The industry’s obsession with compute capacity has outpaced the capabilities of traditional networking infrastructure. This disconnect creates a “network wall” that restricts data flow, drives up latency, and leaves many organizations settling for less than 50% GPU utilization.
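As a rough illustration of how the network wall shows up as idle GPUs, the minimal sketch below models a GPU that alternates between computing on a batch and waiting for the fabric to deliver the next one. The step time, transfer size, and link speeds are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope model: a GPU alternates between computing and
# waiting for the network, with no overlap of compute and communication.
# All numbers are illustrative assumptions, not Eridu's measurements.

def gpu_utilization(compute_s: float, bytes_per_step: float, net_gbps: float) -> float:
    """Fraction of wall-clock time the GPU spends computing."""
    comm_s = bytes_per_step * 8 / (net_gbps * 1e9)  # transfer time in seconds
    return compute_s / (compute_s + comm_s)

step_compute = 0.010   # 10 ms of useful GPU work per training step (assumed)
step_bytes = 200e6     # 200 MB exchanged over the fabric per step (assumed)

for gbps in (100, 400, 800):
    print(f"{gbps} Gb/s link -> {gpu_utilization(step_compute, step_bytes, gbps):.0%} utilization")
```

With those assumed numbers, a 100 Gb/s link keeps the GPU busy less than 40% of the time, while the same workload on an 800 Gb/s link exceeds 80%; that gap is what the "network wall" describes.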

Just as a sprawling metropolis cannot function without adequate highways, an AI supercomputer cannot deliver on its promise if the network cannot feed data to its processors. Bridging this gap requires a clean-sheet approach to data center networking.

The Guests:

Drew Perkins, CEO and Co-Founder at Eridu

Omar Hassen, CPO and Co-Founder at Eridu

Key Takeaways

  • GPUs are scaling far faster than traditional Ethernet switch chips, causing critical data ingestion bottlenecks in AI data centers.
  • Legacy networking’s historical cadence of doubling capacity with each generation (2X) is insufficient for modern AI models that require 10X growth.
  • Eridu’s clean-sheet design uses higher-radix, high-port-count switches to create a flatter, lower-latency network architecture (see the sketch after this list).
  • By optimizing network architecture, organizations can reduce network power consumption by 70% and cut overall network costs by 40%.
  • Improving network efficiency directly increases the amount of available power for compute, enabling the generation of more tokens and unlocking higher economic value.
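To see why higher-radix switches flatten the fabric, here is a small sketch using textbook fat-tree capacity bounds. These are generic Clos/fat-tree formulas with assumed radix and GPU counts, not Eridu's specific design or product numbers.

```python
# In a classic fat-tree built from radix-k switches, a 2-tier (leaf/spine)
# fabric connects roughly k^2 / 2 hosts and a 3-tier fabric roughly k^3 / 4.
# Fewer tiers means fewer switch hops on every path, hence lower latency.

def tiers_needed(hosts: int, radix: int) -> int:
    """Smallest number of fat-tree tiers that can connect `hosts` endpoints."""
    if hosts <= radix:                 # a single switch suffices
        return 1
    if hosts <= radix ** 2 // 2:       # leaf/spine
        return 2
    if hosts <= radix ** 3 // 4:       # 3-tier Clos
        return 3
    return 4                           # anything larger needs another tier

gpus = 32_768
for radix in (64, 128, 256):
    print(f"radix {radix}: {tiers_needed(gpus, radix)} tiers for {gpus} GPUs")
```

Under those bounds, a radix-64 fabric needs three tiers to reach 32K GPUs, while a radix-256 fabric does it in two, cutting the number of switches (and hops) in every path.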

***

Read Full Transcript & Technical Deep Dive
