Plan Your Cloud Exit Before You Start: Cost Intelligence Strategy from Akamai

Guest: John Bradshaw
Company: Akamai
Show Name: An Eye on AI
Topic: Cloud Native

What if the key to a successful cloud strategy isn’t just about what you build, but how easily you can leave? John Bradshaw, Field CTO Cloud, EMEA at Akamai, shares a counterintuitive principle that’s reshaping how enterprises approach cloud architecture: “Plan to leave before you plan to start.”

In a recent conversation with TFiR, Bradshaw tackled the elevator pitch question every CIO faces—how do you build cost intelligence into your cloud strategy without getting trapped by vendor lock-in or runaway expenses?

Start with Three Core Principles

Bradshaw’s advice cuts through the noise of cloud deployment trends. Don’t adopt technology because everyone else is doing it. Instead, identify three fundamental values that align with your business objectives. Are you focused on cost reduction? Revenue growth? Risk mitigation?

“Every CEO wants to do all of those things, but we all know that you can’t do everything all of the time,” Bradshaw explains. “You’ve got to focus or you go everywhere.”

This strategic clarity becomes your north star. Before diving into technical details like data compression or tiered storage optimization, establish guardrails that address what actually matters to your business over the next 12 to 24 months.

The Flexibility Imperative

The second critical element is architecting for portability from day one. Bradshaw emphasizes building solutions that aren’t locked into any single service provider. This flexibility allows your infrastructure to evolve with your business rather than constraining it.

This principle becomes especially relevant for globally distributed applications across regions like EMEA. Bradshaw sees a clear pattern emerging—a core-and-edge architecture that mirrors traditional network design principles.

Core and Edge: The New Cloud Architecture

Organizations are growing concerned about costs from centralized cloud and AI providers. The ability to experiment with different models and roll your own versions is becoming critical. The solution? Bring workloads into the center for processing and training, then distribute the output as close to users as possible.

“You get a ginormous cluster of GPUs for eight hours to process your model once a week or a month,” Bradshaw notes. “You’re not sat there with all of these running all of the time or burning tokens because you’ve imported 50,000 files.”

This approach delivers both performance and efficiency. You achieve economies of scale for compute-intensive tasks while keeping operational costs manageable through distributed edge delivery.

Why This Matters Now

As AI workloads proliferate and cloud costs continue rising, the organizations that maintain control over their architectural destiny will have a competitive advantage. Bradshaw’s “plan to leave” philosophy isn’t about distrust—it’s about maintaining leverage and options in a rapidly evolving technology landscape.

Some cloud projects will succeed. Others won't. Putting all your eggs in one basket creates risk, no matter how good the provider is. For enterprise leaders designing modern cloud strategies, flexibility isn't optional—it's the foundation of resilience.

Cloud cost intelligence starts with strategic clarity, demands architectural flexibility, and requires thinking about your exit before your entrance.
