AI may be the hottest technology story of our time, but its foundations remain unstable. At the Open Source Summit in Denver, Steve Watt, Field CTO at Red Hat, made a compelling case for why open source must define the core of AI—just as Linux did for operating systems and Kubernetes did for cloud-native infrastructure.
Defining Open Source AI
Watt framed open source AI in simple terms: “open source model weights surrounded by open source software components.” That definition already has real-world examples. Red Hat backs vLLM, one of the most widely used inference engines, while IBM Research contributes Granite large language models. With Red Hat OpenShift AI, enterprises can train, fine-tune, and deploy models across hybrid cloud and even edge environments where resources are tight.
Collaboration, Hardware, and Portability
Red Hat’s position inside IBM offers advantages that go beyond channel reach. Watt described how his team works closely with IBM Research to test large-scale GPU clusters and fold those learnings back into open source projects like InstructLab and vLLM. Hardware abstraction remains a grand challenge, and Red Hat has invested in projects like Triton, which enables GPU-agnostic kernels that run across AMD and NVIDIA hardware. “It’s a software portability story,” Watt said, likening PyTorch’s ecosystem to the rise of Linux and cloud native.
The Agentic AI Shift
Perhaps the most urgent transformation is agentic AI, where machines talk to machines at scale. Human patience is measured in milliseconds, but agents can multiply demand by orders of magnitude. That’s why Red Hat launched llm-d, a distributed inference project designed to handle agent-to-agent workloads. As Watt explained, “How do you increase demand but still maintain throughput and efficiency? That’s the problem agentic AI introduces.”
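To see why agent-to-agent traffic reshapes the inference problem, a back-of-the-envelope sketch helps. The numbers, function name, and fan-out model below are illustrative assumptions, not figures from Red Hat: if each request an agent handles spawns a handful of follow-up calls to other agents, total inference demand grows geometrically with chain depth.

```python
# Illustrative sketch: how agent-to-agent fan-out multiplies inference
# demand relative to direct human use. All parameters are hypothetical.
def total_requests(seed_requests: int, fanout: int, depth: int) -> int:
    """Each request spawns `fanout` follow-up calls, chained `depth` levels deep."""
    return seed_requests * sum(fanout ** d for d in range(depth + 1))

# 1,000 users prompting a model directly: 1,000 requests.
human = total_requests(1_000, fanout=0, depth=0)
# The same 1,000 seed prompts routed through agents that each make
# 5 follow-up calls, three levels deep: 156,000 requests.
agentic = total_requests(1_000, fanout=5, depth=3)
print(human, agentic, agentic // human)  # → 1000 156000 156
```

Even these modest assumptions yield a 156x increase in request volume, which is the throughput-and-efficiency problem Watt describes distributed inference systems being built to absorb.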
Stability in a Time of Creative Destruction
The pace of AI innovation has created churn in open source projects, raising questions about sustainability. Watt pointed to PyTorch and vLLM as anchors with healthy governance, while stressing the need for standards to avoid fragmentation. “What feels unprecedented right now is the amount of creative destruction in AI,” he said. Without stable foundations, it’s difficult for enterprises to invest or for open source companies to build sustainable business models.
For Watt, the search for the “Linux of AI” is still underway. Whether it emerges from inference servers, training frameworks, or hardware abstraction layers, the goal is clear: create a reliable, open foundation that developers, enterprises, and communities can build on for decades to come.