Guest: Frank Nagle
Company: The Linux Foundation
Show Name: An Eye on AI
Topics: AI Governance, Data Sovereignty
AI spending has surged into a global arms race, but most organizations still default to the most expensive path: closed, proprietary models. A new working paper from the Linux Foundation reveals a surprising truth — many enterprises may be leaving tens of billions of dollars in value on the table simply by overlooking open models. In this conversation, Frank Nagle, Chief Economist at The Linux Foundation, breaks down the real economics behind today’s AI decisions and what CIOs and CTOs should rethink over the next two years.
If there is one pattern that has repeated itself in every major technology shift — from operating systems to virtualization to cloud — it is that open ecosystems quietly create massive value long before the market fully recognizes it. According to Frank Nagle, open models in AI are following the same trajectory, and the industry is only beginning to understand the long-term implications.
Nagle’s new working paper, “The Latent Role of Open Models in the AI Economy,” attempts to quantify what has so far been anecdotal: the hidden value that open models generate and the economic inefficiencies created when enterprises default to closed systems without examining the alternatives. Using usage data from OpenRouter, a model-routing platform whose traffic reveals which models developers actually choose, Nagle and his team found that roughly 80 percent of model consumption still goes to closed models. This is true even though open models are often far cheaper and increasingly comparable in performance.
The numbers reveal the gap clearly. When comparing prices across the models used on OpenRouter, Nagle found that closed models cost roughly six times more than open models on average. Performance differences exist, but the gap is shrinking quickly. “Open models catch up to closed models within three or four weeks,” he noted, pointing to the rapid improvement cycles driven by community innovation and transparent evaluation.
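The scale of that price gap is easy to see with back-of-the-envelope arithmetic. The sketch below uses hypothetical per-token prices and a hypothetical monthly workload (illustrative assumptions, not figures from the paper or from OpenRouter) to show how a roughly six-fold unit-cost difference compounds over a month of usage.

```python
# Illustrative arithmetic only: the prices and volume below are hypothetical
# assumptions chosen to model a ~6x closed-vs-open unit-cost gap.

closed_price_per_m_tokens = 12.00  # hypothetical USD per million tokens
open_price_per_m_tokens = 2.00     # hypothetical USD per million tokens

monthly_tokens_m = 500  # hypothetical workload: 500M tokens per month

closed_cost = closed_price_per_m_tokens * monthly_tokens_m
open_cost = open_price_per_m_tokens * monthly_tokens_m

print(f"closed: ${closed_cost:,.0f}/mo, open: ${open_cost:,.0f}/mo")
print(f"ratio: {closed_cost / open_cost:.1f}x, "
      f"monthly savings: ${closed_cost - open_cost:,.0f}")
# → closed: $6,000/mo, open: $1,000/mo
# → ratio: 6.0x, monthly savings: $5,000
```

The point is not the specific numbers but the structure: at comparable quality, a constant unit-price multiple turns directly into a constant multiple on the AI line item.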
Yet despite the price gap and fast performance convergence, enterprises still heavily favor closed systems. Nagle highlights switching costs as a major factor. Teams that have already built workflows or integrations around a specific closed provider often find it too costly — or too risky — to switch. “We see this in every technology wave. Once organizations lock into a vendor, they don’t move without a very compelling reason,” he explained.
Misconceptions also play a role. A common assumption is that using an open model means exposing corporate data. Nagle pushed back on this strongly. When organizations run open models on their own infrastructure, data never leaves the environment. It can actually be more private and controllable than sending prompts to proprietary services hosted on the public cloud. “If that’s the reason companies avoid open models, that’s simply misinformation,” he said.
Another driver is perceived supportability. With a closed provider, there is a “throat to choke” when something goes wrong — a comfort many CIOs still prioritize. Today, the open model ecosystem lacks standardized support structures, though the demand for support is growing as more enterprises evaluate long-term AI strategies that balance cost, flexibility, and risk.
To bridge the gap between cost savings and decision-making, Nagle’s research estimates how much value could be unlocked if organizations replaced lower-performing closed models with better-performing open models. The result: a measurable $24.8 billion in unrealized economic value. He expects the real number to be much higher once full enterprise workloads are considered.
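The headline figure comes from the paper’s own methodology, but the shape of such an estimate can be sketched: for each observed workload, if an open model scores at least as well as the closed model actually in use and costs less, the cost difference counts as unrealized value. The workloads, scores, and prices below are hypothetical placeholders, not the paper’s data.

```python
# Sketch of the estimation logic under hypothetical data. Each workload
# records the closed model actually used and the best open alternative;
# value is counted as "unrealized" only when the open model matches or
# beats the closed model's benchmark score at a lower price.

workloads = [
    # (tokens_m, closed_score, closed_$/M, open_score, open_$/M)
    (100, 0.82, 12.0, 0.85, 2.0),  # open wins on score and cost
    (250, 0.90, 15.0, 0.84, 2.5),  # closed still better: no substitution
    (400, 0.78, 10.0, 0.80, 1.5),  # open wins on score and cost
]

unrealized = 0.0
for tokens_m, c_score, c_price, o_score, o_price in workloads:
    if o_score >= c_score and o_price < c_price:
        unrealized += tokens_m * (c_price - o_price)

print(f"unrealized value in this toy sample: ${unrealized:,.0f}")
# → unrealized value in this toy sample: $4,400
```

Summed over real usage data rather than three toy rows, this substitution logic is what produces an aggregate dollar figure like the one in the paper.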
For organizations experimenting with AI today, Nagle recommends a simple, phased approach. First, evaluate which use cases genuinely need top-tier closed models and which ones tolerate slightly lower performance. Second, educate internal teams on the differences between model types — cost, support, training data, and fine-tuning flexibility. Third, start small with controlled experiments to compare quality and cost directly in production-like workflows.
Fine-tuning is a particularly strong advantage of open models. Organizations can tailor an open model to domain-specific vocabulary, compliance rules, or internal tone. Closed models allow some customization, but not to the same depth. Nagle shared the example of African governments that are fine-tuning open models for local languages that most commercial closed models do not yet cover.
This level of customizability is essential for enterprises pursuing digital and AI sovereignty — another theme Nagle believes will accelerate the open-model movement. As more countries and industries seek to control where their data lives and how models behave, running open models in sovereign environments will become increasingly attractive.
Looking ahead five years, Nagle expects a hybrid future — not an all-open or all-closed world. He compares it to the evolution of operating systems. Linux dominates servers and cloud workloads because of its flexibility and economics, while closed systems still thrive in end-user experiences. Similarly, AI workloads will split: open models powering flexible, cost-efficient enterprise use cases, and specialized closed models dominating areas where regulatory risk, reliability guarantees, and commercial support matter more.
Ultimately, the choice between open and closed models becomes a strategic business decision, not just a technical one. And according to Nagle, understanding that distinction early will determine who captures the next wave of AI-driven economic value.