Guest: Simone Morellato (LinkedIn)
Company: vCluster Labs
Show Name: An Eye on AI
Topics: Kubernetes, Cloud Native
AI isn’t just the latest buzzword making the rounds at KubeCon. It’s forcing fundamental architectural changes to Kubernetes itself. Simone Morellato, Customer Success Lead at vCluster, explains why GPU workloads represent a genuine technological shift that’s reinvigorating innovation in core Kubernetes components.
The GPU Challenge Kubernetes Wasn’t Built For
Kubernetes has weathered many trend cycles. Multi-cloud promises came and went. Networking challenges were solved. But AI workloads present something different—a fundamental mismatch between how Kubernetes was designed and what GPUs actually need.
“Kubernetes was not really built to support GPU workloads, just because the GPUs are built differently, very differently from CPUs,” Morellato explains. This isn’t about bolting on another feature or adding a new label to existing technology. GPU architectures require different scheduling approaches, resource allocation strategies, and orchestration patterns than the CPU-focused design Kubernetes was built around.
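As a concrete illustration of that mismatch, consider how GPUs surface in a pod spec today. With a device plugin installed (the NVIDIA plugin is assumed here), a GPU is exposed as an opaque extended resource: unlike CPU, it cannot be requested fractionally, cannot be overcommitted, and requests must equal limits — the scheduler can only hand out whole devices with no awareness of GPU topology or memory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
  - name: cuda-workload
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        # Extended resource advertised by the NVIDIA device plugin.
        # Whole units only: no fractions, no overcommit, and the
        # request is implicitly set equal to this limit.
        nvidia.com/gpu: 1
```

That single integer is all the core scheduler sees — which is exactly why sharing, partitioning, and topology-aware placement of GPUs require changes deeper in the stack.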
Real Innovation, Not Just Buzzwords
What separates the AI wave from previous Kubernetes trends is the depth of technical change required. Morellato notes that companies are innovating on the core Kubernetes scheduler itself, along with surrounding components, to properly support GPU workloads. This represents genuine technological evolution rather than superficial rebranding.
Previous themes like multi-cloud were more about deployment patterns than fundamental architectural changes. Kubernetes was already built to be cloud-agnostic. Networking challenges, while complex, could be addressed with additional tooling and best practices. Those problems are now largely solved.
Why This Matters for Decision-Makers
AI workloads are bringing necessary challenges that push Kubernetes technology forward. For organizations investing in AI infrastructure, this means understanding that GPU orchestration isn’t just regular Kubernetes with different hardware attached. The scheduler, resource management, and workload distribution all need rethinking.
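One concrete example of that rethinking is Kubernetes' Dynamic Resource Allocation (DRA) API, which replaces the opaque integer counter with structured device claims. The sketch below is a hedged illustration based on the DRA resource model (API group `resource.k8s.io`; the device class name `gpu.nvidia.com` is an assumption that depends on the installed driver); exact API versions and class names vary by cluster.

```yaml
# A template describing what kind of device a pod needs.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        # Resolved by the cluster's DRA driver; name is driver-specific.
        deviceClassName: gpu.nvidia.com
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-example
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: cuda-workload
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    resources:
      claims:
      - name: gpu  # bind the claimed device to this container
```

The design shift is the point: instead of counting interchangeable units, the scheduler negotiates with a driver over devices that have attributes — which is the kind of core-component innovation Morellato describes.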
vCluster and other platforms in the ecosystem are adapting to support these new requirements. The innovations happening now will determine how effectively Kubernetes can serve as the foundation for AI infrastructure over the next several years.
This technical evolution signals maturity in the cloud-native space. After years of solving deployment and networking patterns, the community is now tackling fundamental compute architecture challenges. That’s good news for anyone building AI infrastructure on Kubernetes—the technology is evolving to meet the actual requirements, not just the marketing hype.