Guest: Frank Nagle
Company: The Linux Foundation
Show Name: An Eye on AI
Topic: AI Governance
If price and performance were the only factors driving AI model adoption, enterprises would save $24.8 billion annually by switching to open alternatives. Instead, they’re paying a premium for closed models from OpenAI, Anthropic, and Google. Why? The answer reveals critical blind spots in how organizations evaluate AI infrastructure—and what it means for the future of enterprise technology.
Frank Nagle, Chief Economist at The Linux Foundation, has quantified what many technology leaders suspect: the AI market isn’t behaving rationally according to traditional economic models. His research identifies a staggering $24.8 billion gap between what organizations could pay for AI capabilities and what they actually spend.
But Nagle rejects the notion that this represents pure irrationality. Instead, he points to a complex web of valid and invalid factors shaping adoption decisions.
Valid Business Concerns Driving Premiums
The valid reasons center on operational realities that benchmark tests can’t measure. Organizations value having someone to call when systems fail. They need guaranteed uptime and established support channels. These soft costs—what Nagle calls “the ability to have somebody to call if something goes wrong”—carry real business value that raw performance metrics miss entirely.
Switching costs represent another rational barrier. Early adopters of closed models have built infrastructure, workflows, and expertise around specific platforms. Migrating to open alternatives requires upfront investment in retraining, integration work, and potential service disruption. Even when the long-term economics favor open models, the short-term friction keeps organizations locked in.
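The switching-cost trade-off above can be made concrete with a simple payback calculation. The sketch below is purely illustrative: the migration cost and monthly bills are hypothetical figures, not numbers from Nagle's research.

```python
# Illustrative break-even sketch for the switching-cost argument.
# All dollar figures are hypothetical assumptions for this example.

def payback_months(switching_cost: float,
                   closed_monthly_cost: float,
                   open_monthly_cost: float) -> float:
    """Months until a one-time migration cost is recouped by monthly savings."""
    monthly_savings = closed_monthly_cost - open_monthly_cost
    if monthly_savings <= 0:
        raise ValueError("Open alternative must cost less per month to break even.")
    return switching_cost / monthly_savings

# Example: a $300k migration effort, an $80k/month closed-model API bill,
# and $35k/month to run an open model (compute plus operations).
months = payback_months(300_000, 80_000, 35_000)
print(f"Break-even after {months:.1f} months")  # roughly 6.7 months
```

Under these assumed numbers the migration pays for itself in under a year, which is why the short-term friction, rather than the long-term economics, is often what keeps organizations locked in.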
Nagle warns against short-term thinking on this point. “Don’t only think about the switching costs today,” he emphasizes. “Think about what happens if one of these closed source companies goes bankrupt and disappears tomorrow. You’re going to be forced to transfer over to something else, and you won’t have control over the timing.”
Invalid Assumptions and Misconceptions
On the flip side, some of that $25 billion stems from misunderstandings about open models. Organizations worry their proprietary data will become publicly available—a misconception about how open weights models actually work. These fears, while unfounded, still drive real purchasing decisions.
The Open Weights Middle Ground
The AI landscape introduces a category that didn’t exist in traditional open source: open weights models. Meta’s Llama exemplifies this approach—internally developed, then released with accessible weights for free use, but without the community contribution model that defined Linux or Kubernetes.
“That’s a little bit different,” Nagle explains, “because you don’t have the community kind of giving back to it and creating what we ended up with Linux and other open source projects.”
This creates strategic uncertainty. Some AI models meet the Open Source Initiative’s full definition—open code, open weights, and open training data. Others occupy the middle ground. The long-term implications of this split remain unclear.
Learning from Open Source History
Nagle draws parallels to open source software adoption, which followed a similar arc. Early skepticism gave way to widespread adoption as organizations became educated about the technology. Today, Microsoft Azure runs on Linux. Windows installations include Linux components. The question isn’t whether open models will gain ground, but how long the transition takes and where the dividing lines ultimately fall.
“As people come to understand all those things, they start shifting to open more, at least for some contexts,” Nagle notes. The enterprise AI market appears to be in the early education phase—where Linux was twenty years ago.
For technology leaders, the implications are clear: evaluate not just today’s costs, but the strategic flexibility and long-term risk profile of your AI infrastructure choices. The $25 billion inefficiency exists partly because those considerations don’t fit neatly into price and performance spreadsheets.