Guest: Frank Nagle
Company: The Linux Foundation
Show Name: An Eye on AI
Topic: AI Governance
You’re making million-dollar AI decisions—but are you evaluating them correctly? Frank Nagle, Chief Economist at The Linux Foundation, cuts through the noise with a practical framework enterprise technology leaders can apply immediately. The question isn’t whether open or closed AI models will dominate; it’s which approach best fits your specific use case and organizational constraints.
The AI model selection debate has become polarized between two extremes. On one side are predictions of monopolistic control by a handful of tech giants. On the other is the belief that commoditized open models will erase all competitive moats. Nagle’s research suggests reality will land somewhere strategically in between—much like the evolution of operating systems over the past three decades.
In recent research, Nagle and his team identified what they call “open model under-utilization” and developed a diagnostic framework specifically for enterprise decision-makers. Presented in Table 2 of their paper, the framework breaks the problem into two primary categories: genuine preferences and information misconceptions. Technology leaders can complete this assessment in as little as 10 minutes to clarify their true positioning.
The first consideration is switching costs. Organizations deeply integrated with proprietary AI solutions often face significant migration friction. The key question becomes whether this lock-in reflects a deliberate strategic choice or simple inertia. Many enterprises discover they are paying premium prices not for superior capabilities, but for the perceived safety of vendor support and liability coverage.
Information friction represents another critical factor. Risk-averse organizations frequently default to closed models without fully assessing whether open alternatives could meet their requirements. Nagle emphasizes that this isn’t about ideology—it’s about aligning technology choices with real business needs. The framework helps leaders distinguish genuine requirements from unfounded assumptions.
Looking ahead five years, Nagle predicts a bifurcation similar to what occurred with operating systems. Linux gradually displaced proprietary systems in server environments where openness, customization, and cost mattered most. Meanwhile, Windows retained dominance on desktops, where ease of use and integrated support proved more valuable. The AI landscape is likely to follow a comparable path.
Specialized use cases such as healthcare may continue to favor closed, proprietary models. These environments prioritize regulatory compliance, dedicated support channels, and clear accountability. By contrast, commodity applications and infrastructure layers are likely to increasingly adopt open models, with innovation and profit generation shifting to the layers built on top of them.
The global dimension adds another layer of complexity. Digital sovereignty concerns are accelerating open model adoption as nations and organizations seek to avoid concentrating AI capabilities in specific geographic regions. This geopolitical pressure reinforces the technical and economic case for open alternatives.
The leaked Google memo stating “we have no moat in AI” reflected genuine concern about the competitive threat posed by open models. However, Nagle’s analysis suggests the outcome will not be total commoditization. Instead, innovation will continue across both open and closed ecosystems, with market segmentation driven by use-case requirements rather than blanket superiority of either approach.
For technology leaders, this means abandoning one-size-fits-all thinking. The organizations that succeed will be those that deliberately match model selection to workload characteristics, internal capabilities, and strategic objectives. Open models will increasingly power commodity functions, while specialized closed models serve high-stakes applications requiring guaranteed support.
The parallel to open source software is instructive. Not all software became free, but open source enabled entirely new business models and dramatically reduced costs at the infrastructure layer. The AI economy is likely to follow a similar pattern, with value accruing to applications and services rather than base models themselves.
Nagle’s framework provides a structured way to navigate these decisions. Technology leaders should assess switching costs, information gaps, risk tolerance, and use-case specifics before defaulting to either open or closed solutions. The next 12 to 24 months will be critical, as organizations move from experimental AI deployments to production-scale implementations that demand clear, durable architectural choices.