
The OpenStack Moment for AI: What No One Tells You About MCP | Randy Bias, Mirantis


Guest: Randy Bias
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

When Anthropic and OpenAI announced the Agentic AI Foundation (AAIF), most observers saw just another AI collaboration. Randy Bias, VP of Strategy & Technology at Mirantis, recognized something far more strategic: a community-first power play that will cement MCP as the de facto standard for agentic AI—exactly as OpenStack did for infrastructure-as-a-service over a decade ago.

The Strategic Parallel: From OpenStack to MCP

Before OpenStack emerged, the open source infrastructure landscape was fragmented across Eucalyptus, CloudStack, and OpenNebula. Each was technically capable, but none prioritized ecosystem building. When OpenStack launched with a community-first approach, it fundamentally changed the game.

“Most people in open source understand that it’s less about the code and more about the community,” Bias explains. “It’s more about the people, it’s more about the ecosystem.” That philosophy drove OpenStack to become the dominant open infrastructure standard, effectively displacing all competitors.

The Agentic AI Foundation announcement follows the same playbook. By bringing together Anthropic, OpenAI, and other key players around a shared protocol for agent-to-agent and agent-to-tool communication, the foundation aims to “suck the oxygen out of the room” for competing approaches. Just as CloudStack was sold to Citrix and Eucalyptus faded after OpenStack’s rise, alternative agent communication protocols face an uphill battle against MCP’s growing ecosystem momentum.
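To make the "shared protocol" concrete: MCP messages are JSON-RPC 2.0, and the specification defines methods such as `tools/call` for agent-to-tool invocation. The sketch below shows the rough shape of that exchange; the tool name (`query_ehr`), its arguments, and the in-process dispatch are hypothetical simplifications, not a real MCP server implementation.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request asking an MCP server to invoke a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # method name per the public MCP specification
        "params": {"name": tool_name, "arguments": arguments},
    }

def handle_tool_call(request, tools):
    """Dispatch a tools/call request to a registered handler and wrap the result."""
    params = request["params"]
    handler = tools.get(params["name"])
    if handler is None:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32602, "message": "Unknown tool"}}
    result_text = handler(**params["arguments"])
    # MCP tool results carry a list of typed content blocks.
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": result_text}]}}

# Toy registry standing in for real agent-to-tool integrations.
tools = {"query_ehr": lambda patient_id: f"records for {patient_id}"}

request = make_tool_call(1, "query_ehr", {"patient_id": "p-123"})
response = handle_tool_call(request, tools)
print(json.dumps(response))
```

The point of standardizing this envelope is exactly the ecosystem effect Bias describes: any MCP-speaking agent can call any MCP-speaking tool without a custom integration layer.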

Why Consumer LLM Conversations Miss the Enterprise Opportunity

While consumer discussions remain fixated on frontier large language models, enterprise teams have already shifted focus to agentic AI. The distinction matters enormously for competitive advantage.

Basic LLM usage—even with retrieval-augmented generation and chatbots—doesn’t provide differentiation. Every competitor has access to the same frontier models and can implement similar surface-level AI features. Real value emerges when AI agents connect to proprietary, mission-critical business data that companies cannot and will not send to external services.

Bias points to concrete examples: “If you’re a healthcare business, it’s going to be connected to electronic healthcare records and data that you don’t necessarily want to send outside your four walls.” Financial services organizations running high-frequency trading platforms face even stricter constraints. “There’s no way that high frequency trading platforms are taking their algorithms and basically shoving them out to any of the frontier models. They just aren’t going to trust that.”

The Agentic Maturity Model and On-Prem Deployment

Mirantis has developed an internal agentic maturity model to assess where organizations stand in their AI journey. The assessment reveals that most enterprises remain at a very early stage, experimenting with frontier models and basic RAG implementations.

The next competitive frontier requires deploying agents on-premises, connected to highly sensitive data and running on local open source LLMs. These models continue improving rapidly, making on-prem deployment increasingly viable for production workloads. This architecture allows organizations to leverage their unique data assets while maintaining security and control.
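Architecturally, "agents on local open source LLMs" often means pointing the agent at an OpenAI-compatible chat-completions endpoint served inside the firewall (local inference servers such as vLLM and Ollama expose this API shape). The sketch below, using only the standard library, shows that wiring; the endpoint URL and model identifier are assumptions for illustration.

```python
import json
import urllib.request

# Assumed address of an on-prem, OpenAI-compatible inference server.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_payload(model, system_prompt, user_prompt):
    """Assemble a chat-completion payload targeting a locally hosted model."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,  # low temperature for repeatable internal workflows
    }

def complete(payload, endpoint=LOCAL_ENDPOINT):
    """POST the payload to the local inference server; prompts never leave the premises."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "llama-3.1-8b-instruct",  # hypothetical local model id
    "Answer only from the attached trading records.",
    "Summarize yesterday's fills.",
)
print(payload["model"])
```

Because the endpoint lives on the organization's own hardware, the sensitive data Bias describes (health records, trading algorithms) stays inside the four walls while the agent logic remains portable across models.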

“The huge value is going to come where you’re going to get competitive advantage against your rivals,” Bias emphasizes. “It’s going to be the first movers who adopt AI internally, adopt agentic workflows internally, and apply it with their specific data for their context and their business.”

The First-Mover Imperative

The competitive dynamics of agentic AI adoption create a winner-take-most scenario. Organizations that successfully deploy agents against their proprietary data gain compound advantages. They refine workflows faster, extract insights competitors cannot access, and build institutional knowledge around AI-augmented processes.

Those who delay face an increasingly difficult catch-up game. “Those who do that first will be the ones that win, and those who do it last will be the ones that are impacted,” Bias warns.

The Agentic AI Foundation accelerates this timeline by providing standardized infrastructure for agent communication. Rather than building custom integration layers, enterprises can focus on deploying agents that leverage their unique data and workflows. The ecosystem forming around MCP—much like the ecosystem that formed around OpenStack—will drive rapid innovation and reduce time-to-value.

For CTOs and infrastructure leaders, the message is clear: the window for strategic advantage through agentic AI is open now, but it won’t remain open indefinitely. Organizations must move beyond experimentation with frontier models and begin the harder work of deploying agents against their most valuable data assets. The foundation provides the standardized communication layer needed to build those systems at scale.
