AI Infrastructure

Enterprises Leave $24.8B on Table by Choosing Closed AI Models Over Open Alternatives

Guest: Frank Nagle
Company: The Linux Foundation
Show Name: An Eye on AI
Topics: AI Governance, Data Sovereignty

What if your AI strategy is costing you six times more than it should? New research from The Linux Foundation reveals enterprises could collectively save $24.8 billion by switching to open AI models that match or exceed the performance of their closed counterparts. Frank Nagle, Chief Economist at The Linux Foundation, explains why 80% of users still choose expensive closed models despite superior open alternatives becoming available within weeks.

The research, based on data from OpenRouter—a gateway platform for LLM inference—analyzed approximately 1% of the overall LLM API economy. What Nagle and his team discovered challenges conventional assumptions about AI model selection.

“About 80% of the usage that’s seen on OpenRouter are closed models, despite these models having much higher prices—about six times the price of open models on average,” Nagle explains. Even more striking, these closed models offer only modest performance advantages that disappear quickly.

The Performance Gap Is Closing Fast

The research reveals that open models now catch up to closed alternatives in just three to four weeks. “That rate of catch up is actually getting faster, to the point where open models are catching up to closed models almost within three or four weeks,” Nagle notes.

Using various industry benchmarks for LLM performance, the research team found that while closed models like GPT-4 often lead initially, open models like Meta’s Llama series quickly reach comparable or superior performance levels. This rapid convergence raises a critical question for enterprise decision-makers: why pay premium prices for marginal, temporary advantages?

The $24.8 Billion Question

If open models perform similarly at a fraction of the cost, why do enterprises continue choosing closed alternatives? Nagle identifies several factors driving this paradox.

Switching costs play a significant role. Organizations that initially adopted GPT-series models face technical and organizational friction when considering alternatives. “Say you start using the GPT series and you’re paying OpenAI whatever it is that they’re charging, and then newer, better models come around, but it’s hard for you to swap away from that,” Nagle explains.

Security and liability concerns also factor heavily. Some decision-makers worry that open models from certain regions carry “geopolitical baggage.” Others express concerns familiar from the early days of open source software: “If I’m using an open model, if something goes wrong, who am I going to sue or who am I going to call for customer support?”

Information frictions matter too. Closed models benefit from superior marketing and mindshare, while comparable open alternatives remain less visible to decision-makers.

The Path Forward

The research doesn’t advocate abandoning closed models entirely. Organizations needing cutting-edge performance for specific use cases may justify the premium. However, Nagle’s team calculated that if companies using mid-tier closed models switched to better-performing open alternatives, the collective savings would reach $24.8 billion.
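The shape of that calculation is straightforward arithmetic on the price gap. The sketch below is a back-of-envelope illustration only, using the article's ~6x average price ratio; the spend figure is a hypothetical placeholder, not a number from the research.

```python
# Back-of-envelope sketch of the savings logic: if open models deliver
# comparable performance at roughly 1/6 the price, savings are the
# difference between current closed-model spend and the open equivalent.

def estimated_savings(closed_spend: float, price_ratio: float = 6.0) -> float:
    """Return the spend avoided by moving closed-model usage to open
    models priced at closed_spend / price_ratio."""
    open_equivalent = closed_spend / price_ratio
    return closed_spend - open_equivalent

# Hypothetical: $30B of annual closed-model spend at a 6x price gap
savings = estimated_savings(30e9)
print(f"${savings / 1e9:.1f}B potential savings")  # prints "$25.0B potential savings"
```

The actual $24.8B figure comes from the team's far more detailed modeling of which workloads could realistically switch; the point of the sketch is only that a large multiple on price, applied to large aggregate spend, compounds into billions quickly.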

For CIOs and CTOs, this research suggests a framework for AI investment decisions. Evaluate whether your use cases truly require the latest closed models or whether open alternatives—updated monthly—can deliver comparable results at lower costs. Factor in total cost of ownership, not just model performance. And stay informed about the rapidly evolving open model landscape.

The Linux Foundation team plans to conduct surveys to further understand decision-making around model selection. But the current findings are clear: the AI economy contains substantial latent value in open models that enterprises aren’t fully capturing.

As AI becomes essential infrastructure across industries—following the trajectory of software, cloud, and containerization before it—optimizing these investments matters more than ever. The question isn’t whether to use AI, but whether you’re using it as cost-effectively as possible.
