Guest: John Bradshaw (LinkedIn)
Company: Akamai
Show Name: An Eye on AI
Topics: Cloud Computing
For years, the cloud was sold as a cheaper, simpler alternative to managing data centers. But as AI workloads explode and global applications demand real-time performance, that promise is being rewritten. Cloud costs, GPU scarcity, and data movement are now board-level topics—and enterprises are rethinking what “value” in the cloud really means.
John Bradshaw, Field CTO Cloud, EMEA at Akamai, has seen this shift firsthand. In a conversation with Swapnil Bhartiya, he explained why the traditional model of cloud economics no longer works and how organizations can adapt.
“Cloud can still deliver savings,” Bradshaw noted, “but only if you build the right guardrails and design your applications to move wherever your customers are.” He compared the cloud’s power to a house full of Lego—an amazing toolkit, but one that can quickly turn messy without discipline. Just because compute and storage are on tap doesn’t mean they’re automatically efficient.
AI has accelerated this tension. Training and inference workloads depend heavily on GPU placement and data movement, often across regions and providers. “There isn’t really an on-prem paradigm for AI,” Bradshaw said. “Companies invested in cloud years ago, and now they’re realizing flexibility matters more than ever.” Many are avoiding the single-provider mindset that led to costly lock-in during earlier cloud waves.
The strain on traditional hyperscalers is becoming obvious. The combination of data gravity, latency, and regional constraints is exposing the limits of centralized models. “If you want an accurate model, you need huge amounts of data—and moving that around is expensive,” Bradshaw explained. “Latency still matters, even with big pipes.”
This isn’t just a technical problem—it’s financial and strategic. AI training runs can consume enormous resources, and if workloads sit too far from users, performance and user experience suffer. Bradshaw pointed out that when a model has to process everything from fashion trends to black-hole physics, “you’re paying for a lot of unnecessary intelligence.” Bringing computation closer to where it’s used can reduce costs and latency, especially in multi-region environments.
Reliability has become another major concern. The recent global AWS outage highlighted the fragility of centralized cloud architectures. Bradshaw argued that while outages are inevitable, resilience can be designed in. “Everything fails,” he said. “The goal is for your users never to notice.” His advice: abstract workloads across providers and regions so that failure in one area doesn’t translate into downtime everywhere.
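The “abstract across providers and regions” idea can be sketched as a simple health-checked failover: probe a prioritized list of deployments and route to the first one that answers. This is a minimal illustration, not Akamai’s implementation; the endpoint URLs and region names below are hypothetical.

```python
import urllib.request

# Hypothetical deployments of the same service across regions/providers,
# listed in order of preference. These URLs are illustrative only.
ENDPOINTS = [
    "https://eu-west.example-app.com/health",
    "https://us-east.example-app.com/health",
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # region down or unreachable: try the next candidate
    return None  # every region failed; callers can degrade gracefully
```

Real deployments would put this logic in DNS or a global load balancer rather than client code, but the principle is the same: failure in one region becomes a routing decision, not an outage.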
That leads to the question of long-term commitments. Many enterprises still sign multi-year contracts for discounts, but Bradshaw warned that these agreements can create dangerous concentration risk. “It’s fine to bet on a provider you love,” he said, “but you should also budget for an exit plan.” He suggests including a line item for migration—essentially an insurance policy that allows companies to pivot quickly if business or technical needs change.
Looking forward, Bradshaw sees the emergence of “Cloud Economics 2.0.” The first phase was about cost visibility through FinOps; the next will be about value realization through “ValueOps.” This means aligning technology investments with business outcomes—tracking how cloud services directly impact customer acquisition, retention, and delivery. “It’s not just about controlling spend,” he said. “It’s about understanding where the money creates value.”
AI itself will be part of this evolution. Bradshaw believes that “agentic AI” will drive smarter, more dynamic decision-making across operations and finance. For example, systems could automatically optimize cloud capacity ahead of known events—like payroll cycles or regional weather patterns—improving efficiency without manual oversight. “You haven’t bought more ice cream,” he joked, “you’ve just sent it to the right store.”
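The “capacity ahead of known events” idea reduces to scaling a baseline by a calendar of predictable demand spikes. A minimal sketch follows; the event dates and multipliers are invented for illustration.

```python
from datetime import date

# Hypothetical event calendar: dates with known demand spikes and
# illustrative capacity multipliers (not real forecasts).
KNOWN_EVENTS = {
    date(2025, 11, 28): 3.0,  # e.g. a payroll cycle
    date(2025, 12, 24): 2.0,  # e.g. a seasonal peak
}

BASELINE_INSTANCES = 4

def planned_capacity(day: date) -> int:
    """Scale baseline capacity up ahead of known demand events."""
    multiplier = KNOWN_EVENTS.get(day, 1.0)
    return int(BASELINE_INSTANCES * multiplier)
```

An agentic system would learn these multipliers from history rather than hard-coding them, but the decision it automates is exactly this lookup: same ice cream, sent to the right store in advance.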
So what should CIOs do now? Bradshaw’s advice is practical: define three principles—save money, grow the business, or reduce risk—and design around them. “You can’t do everything,” he said. “Focus on what matters most in the next 12 to 24 months.” Above all, avoid lock-in by building flexible, portable architectures that can evolve with your business.
For globally distributed organizations, the winning model blends a strong core with regional edges. Intensive training or processing might happen centrally, but inference and interaction should live as close to users as possible. This hybrid approach balances cost, performance, and resilience—exactly the combination enterprises need as AI reshapes every layer of their stack.
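The core-plus-edge model above boils down to a routing decision: send each inference request to the closest regional edge. A toy sketch, using squared coordinate distance as a stand-in for real latency measurements; the region names and coordinates are hypothetical.

```python
# Hypothetical edge regions with (latitude, longitude). In this model the
# central core handles training while these edges serve inference.
EDGE_REGIONS = {
    "eu-west": (53.3, -6.2),
    "us-east": (39.0, -77.5),
    "ap-south": (19.1, 72.9),
}

def nearest_edge(user_lat: float, user_lon: float) -> str:
    """Pick the edge region closest to the user (squared-distance proxy)."""
    def dist2(coords):
        lat, lon = coords
        return (lat - user_lat) ** 2 + (lon - user_lon) ** 2
    return min(EDGE_REGIONS, key=lambda region: dist2(EDGE_REGIONS[region]))
```

Production systems would route on measured latency or anycast rather than geography, but the design choice is the one Bradshaw describes: interaction lives at the edge, heavy processing stays in the core.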
Bradshaw summed it up simply: “Plan to leave before you start.” It’s advice that captures the essence of modern cloud thinking—freedom to move, adapt, and stay in control no matter how technology evolves.