Why Cloud Foundry Still Powers AI Workloads in the Kubernetes Era | Ram Iyengar

Guest: Ram Iyengar
Organization: Cloud Foundry Foundation (CFF)
Show: The Source
Topics: Kubernetes, Cloud Foundry

Cloud Foundry doesn’t just survive technology waves—it adapts to them. While Kubernetes dominates conversations about container orchestration, Cloud Foundry quietly enables enterprises to run AI workloads, legacy systems, and modern applications side by side. Ram Iyengar, Chief Evangelist at Cloud Foundry Foundation (CFF), explains how the platform’s developer-first approach and advanced scheduling make it uniquely suited for AI infrastructure—and why prompt-to-production might be closer than we think.

Cloud Foundry’s Timeless Developer Experience

Cloud Foundry has always aimed to be the substrate between developers and infrastructure. “The code you’ve written runs the way you meant for it to—and bonus, it does it in an open source way,” Iyengar explains. That magic hasn’t changed in 10 years. What has changed is the infrastructure underneath and the workloads on top.

At KubeCon Atlanta, Iyengar observed that every organization runs diverse workloads—Kubernetes, VMs, bare metal, mainframes. “Nobody is running Kubernetes in isolation,” he notes. Cloud Foundry fits into this reality by running alongside Kubernetes, not against it. The two platforms cross-pollinate ideas and best practices. Cloud Foundry’s scheduler, for instance, has long handled both long-running and bursty workloads—capabilities Kubernetes projects like Volcano are now adding.

Meeting the AI Wave

AI workloads are just another evolution Cloud Foundry is designed to handle. The platform has two distinct pieces: the runtime that executes workloads, and the service broker that provides dependencies. “Data and LLMs are offered through services. Training and inference run as workloads,” Iyengar says. “It’s a very simple fit.”
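The two-piece model maps cleanly onto the standard cf CLI workflow. The following is a hypothetical sketch: the service offering name ("llm-gateway"), plan, and app names are illustrative placeholders, not real marketplace entries; the commands themselves (`cf create-service`, `cf push`, `cf bind-service`, `cf start`) are standard Cloud Foundry CLI.

```shell
# Hypothetical sketch: "llm-gateway", "standard", "my-llm", and
# "inference-api" are illustrative names, not real marketplace entries.

# Service broker side: provision an LLM endpoint as a managed service.
cf create-service llm-gateway standard my-llm

# Runtime side: stage the inference app without starting it yet.
cf push inference-api --no-start

# Bind the service so its credentials are injected (VCAP_SERVICES),
# then start the workload.
cf bind-service inference-api my-llm
cf start inference-api
```

In this model the broker handles the dependency (data, model endpoints) while the runtime schedules and runs the training or inference code, which is the "very simple fit" Iyengar describes.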

VMware Tanzu already offers commercial Cloud Foundry solutions for AI infrastructure, with customers deploying inference and training workloads in production. The platform’s scheduler efficiently manages GPU-based infrastructure, a critical requirement for AI applications. “If you went to somebody with a paper and pencil and said ‘architect your AI infrastructure,’ everybody will converge to a version of Cloud Foundry at some point,” Iyengar argues. The core is a scheduler managing diverse workloads—exactly what Cloud Foundry has always been.

From Prompt to Production

Iyengar envisions a future where developers prompt an AI model, which generates code, and Cloud Foundry deploys it instantly. “Cloud Foundry already does half the job,” he explains. “There’s only the question of generating the right kind of code for Cloud Foundry to deploy.”

This vision addresses a gap in current generative AI workflows. Developers get code from a model, then manually review it, add configurations, manage version control, and deploy. “There should be a much smoother way where you prompt and that becomes code, and that gets deployed,” Iyengar says. Cloud Foundry community members have already built a Model Context Protocol (MCP) integration that lets users deploy to Cloud Foundry directly from a prompt, without running the iconic “cf push” command themselves.

Resilience Through Evolution

Cloud Foundry’s survival through the Docker, Kubernetes, and now AI eras comes down to solving persistent problems: getting code from developers to production reliably. “We tend to rediscover a set of problems in different contexts,” Iyengar observes. “We did it for the cloud, then for containers, then for Kubernetes, and now for AI.”

Chris Aniszczyk noted in his KubeCon keynote that Cloud Foundry was one of the earliest platforms in the room when Kubernetes and cloud-native kicked off. That early influence shaped many cloud-native tools and patterns. But Iyengar doesn’t claim Cloud Foundry is the only way forward. “There’s always some nice things across the board,” he says, acknowledging the value of diverse approaches.

For enterprises managing AI workloads, legacy systems, and Kubernetes clusters simultaneously, Cloud Foundry offers a proven path. Its focus on developer experience, advanced scheduling, and open-source flexibility positions it as infrastructure for both today’s workloads and tomorrow’s prompt-to-production reality.
