How AI is Forcing a Rethink of Cloud Economics and Multi-Cloud Strategy | John Bradshaw, Akamai

Guest: John Bradshaw  (LinkedIn)
Company: Akamai
Show Name: An Eye on AI
Topics: Cloud Computing

Cloud has long promised cost savings through elasticity and scale. But the rise of AI workloads is forcing organizations to completely rethink that equation. John Bradshaw, Field CTO Cloud, EMEA at Akamai, cuts through the hype with a stark reality: traditional cloud strategies built for web applications simply don’t work when you’re managing GPU placement, training models, and running inference at the edge.

The New Cloud Economics Reality

The cloud cost savings story isn’t dead, but it requires fundamentally different guardrails in an AI-driven world. Bradshaw points to a critical shift happening across enterprises: teams that rushed to experiment with AI over the past five or six years are now facing executive scrutiny. Budgets were opened, innovation was encouraged, and AI projects proliferated. But ROI hasn’t always materialized as promised.

“We’re now in a situation where that technology is changing remarkably rapidly, but at a business level, people are going, well, I’ve spent all this money, and I’m not sure I’ve seen the return I expected,” Bradshaw explains. This inflection point is forcing organizations to get more strategic about AI infrastructure decisions.

Why Vendor Lock-In is the New Enemy

The mistake many enterprises made during the first cloud migration wave was betting heavily on a single hyperscaler. For AI, that approach is even more dangerous. Different AI models excel at different tasks—one provider might deliver superior code generation while another performs better at natural language tasks. Locking into a single platform means limiting your options just as AI capabilities are evolving fastest.

Bradshaw sees successful customers taking a deliberately flexible approach: “They’re happy to try different things. They don’t want to fix with one hyperscaler, one provider, one service, but have the ability to move that where their business develops and where the technology develops.”

This isn’t just about negotiating leverage or avoiding price increases. It’s about matching workloads to the best available technology at any given moment. Cloud native design principles that enable portability across providers are no longer optional—they’re essential for AI-driven businesses.
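One way to bake in that portability is to keep application code behind a provider-agnostic interface, so a backend swap is an isolated change rather than a rewrite. The sketch below is a minimal illustration of that principle, not Akamai's or any vendor's API; the interface, the in-memory stand-in, and the `archive_model_checkpoint` helper are all hypothetical names for the pattern.

```python
from abc import ABC, abstractmethod

# Hypothetical provider-agnostic storage interface: application code depends
# only on this abstraction, so moving between clouds means adding a backend,
# not rewriting callers.
class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for illustration; a real deployment would wrap an
    S3-compatible client or another provider's SDK behind the same interface."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_model_checkpoint(store: ObjectStore, run_id: str, weights: bytes) -> None:
    # The caller never names a provider; the backend is injected.
    store.put(f"checkpoints/{run_id}", weights)
```

The point is architectural: proprietary services can still be used, but only behind seams like this, so that "move where the technology develops" stays a configuration decision instead of a migration project.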

Edge Proximity and Agentic AI

Another fundamental shift involves workload placement. Traditional cloud architectures centralized compute in regional data centers. But agentic AI systems that coordinate multi-user, multi-system processes in real time need to operate much closer to decision points.

“The closer you are to that decision point, the quicker, the better that decision is ultimately going to make,” Bradshaw notes. This is pushing AI inference workloads toward edge locations, closer to end users and data sources. It’s a distributed computing model that requires rethinking not just infrastructure placement but also data movement costs, latency requirements, and orchestration strategies.
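The latency argument compounds because agentic workflows chain many sequential model calls, each paying a network round trip. A back-of-the-envelope sketch, using illustrative numbers rather than measured figures, makes the effect concrete:

```python
def agent_loop_latency_ms(steps: int, network_rtt_ms: float, inference_ms: float) -> float:
    """Total wall-clock time for an agent that makes `steps` sequential model
    calls, each paying one network round trip plus inference time."""
    return steps * (network_rtt_ms + inference_ms)

# Assumed, illustrative values: a 10-step agentic workflow hitting a distant
# regional data center (80 ms RTT) vs. a nearby edge site (5 ms RTT).
regional = agent_loop_latency_ms(steps=10, network_rtt_ms=80, inference_ms=50)  # 1300.0 ms
edge = agent_loop_latency_ms(steps=10, network_rtt_ms=5, inference_ms=50)       # 550.0 ms
```

Under these assumptions, moving inference to the edge cuts end-to-end time by more than half, and the gap widens as agents take more steps.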

Building the Right Guardrails

The challenge isn’t just technical—it’s cultural. Bradshaw uses a memorable analogy: unlimited access to cloud resources without built-in discipline is like giving a child unlimited Lego with no expectation to clean up. “Because you’ve got such ubiquitous access to technology, not baking in that ‘go tidy up your room’ behavior creates challenges in making sure you’re doing it in an efficient way.”

For AI workloads, those guardrails need to be automated. Successful organizations design applications that can move flexibly across providers, implement cost controls at the architecture level, and avoid baking dependencies on proprietary services that create future migration headaches.
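An automated guardrail can be as simple as a policy check that runs before new resources are provisioned. The sketch below is a minimal example of the idea, with made-up team names and budget figures; a real implementation would pull spend from the provider's billing API and gate provisioning or alert owners.

```python
def teams_over_budget(spend_by_team: dict[str, float],
                      budgets: dict[str, float]) -> list[str]:
    """Return teams whose month-to-date spend exceeds their budget.
    Teams with no defined budget are treated as over budget on any spend,
    enforcing the 'tidy up your room' discipline by default."""
    return [team for team, spend in spend_by_team.items()
            if spend > budgets.get(team, 0.0)]
```

Wiring a check like this into the provisioning pipeline turns cost discipline from a quarterly review into an architectural property.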
