
Why the Agentic AI Foundation Matters: Randy Bias on MCP, Open Ecosystems, and the Future of Agentic AI

Guest: Randy Bias (LinkedIn)
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

The Linux Foundation’s launch of the Agentic AI Foundation marks one of the most significant moves in the rapidly evolving AI landscape. At a moment when enterprises are trying to move beyond chatbots and simple RAG deployments, this foundation introduces a path toward shared standards, safer adoption, and an open ecosystem for agentic AI. In this conversation, Randy Bias, VP of Strategy & Technology at Mirantis, breaks down why this announcement matters far beyond the headline.

Agentic AI has moved from hype to headline—and now, with the Linux Foundation announcing the Agentic AI Foundation, it finally has an institutional home designed to support long-term stability, interoperability, and real community momentum. For Randy Bias, who has spent decades working at the intersection of open source, cloud, infrastructure, and platform engineering, the move represents something deeper: an inflection point similar to the arrival of OpenStack or Kubernetes.

Mirantis, he explains, is “very excited about this new foundation” because it creates both direction and pressure. Direction, because MCP (Model Context Protocol) now has a neutral home to mature inside. Pressure, because a standard emerging early forces vendors and enterprises to align before fragmentation takes hold.

Randy recalls the early days of cloud infrastructure, when Eucalyptus, CloudStack, and OpenNebula all competed for attention until OpenStack arrived with a community-first mentality. It wasn’t just code that won; it was ecosystem gravity. The Agentic AI Foundation, he argues, mirrors that dynamic. By bringing Anthropic, OpenAI, and others into the same orbit, the foundation is signaling a shared commitment to a standard for agent-to-agent and agent-to-tool communication.

This is crucial because the industry is moving quickly toward agentic architectures, but without agreement on the basic wiring. MCP has momentum, but the foundation gives it legitimacy and a path for evolution that goes beyond any single company. For enterprises, that signal alone reduces risk.

Why Enterprises Need This Foundation Now

Randy points out that while consumer AI appears mature, enterprise AI is still in the earliest stages. Companies have chatbots, some RAG systems, maybe a basic internal assistant—but none of these create meaningful competitive advantage. True differentiation comes only when businesses apply agentic systems to their proprietary, mission-critical internal data.

That is where the next wave of AI value will be created. A healthcare provider integrating agents with EHR systems. A financial services firm applying agents directly to its high-value trading data. A manufacturer automating root-cause analysis in complex supply chains. All of these depend on one thing: being able to deploy agents on-prem, connected to secure internal systems, with clear governance boundaries.

And that is exactly why Randy believes the Agentic AI Foundation’s timing is critical. It gives enterprises a neutral, standards-based entry point into agentic workflows—something they desperately need to reduce risk.

“One of the pain points,” he explains, “is that enterprises want to know where to start and they want to de-risk the process.” AI already comes with unknowns. Agentic AI adds new layers of complexity: identity, compliance, autonomy, and deep system interaction. A standard framework reduces the uncertainty that has slowed enterprise adoption.

The Rapid Rise of MCP

Randy offers a nuanced view of MCP’s adoption. While many point to “10,000 MCP servers” as proof of traction, Randy says that’s actually the least interesting metric. What matters is that MCP is becoming an expected interface—something vendors will expose directly, the same way they expose REST endpoints today.

He predicts a world where every SaaS application and every enterprise product ships with its own MCP server or endpoint. Salesforce, GitHub, or internal banking systems won’t just have APIs—they’ll have agent APIs. In parallel, enterprises will build their own internal MCP servers to represent their business logic, workflows, and specialized data.

This bifurcation—vendor MCP servers and internal bespoke MCP servers—will become fundamental to how agent systems operate. The foundation’s involvement accelerates this reality by giving enterprises confidence that MCP is not a proprietary experiment, but a standard with long-term backing.
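
To make the bifurcation concrete, here is a minimal sketch of an internal, bespoke MCP server built with the official MCP Python SDK’s FastMCP helper. The server name, the invoice tool, and the returned data are hypothetical placeholders for real business logic.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical internal MCP server exposing a slice of business logic to agents.
mcp = FastMCP("internal-billing")

@mcp.tool()
def lookup_invoice(invoice_id: str) -> dict:
    """Return summary data for an invoice (placeholder implementation)."""
    # A real deployment would query the internal system of record here,
    # subject to the caller's identity and governance policies.
    return {"invoice_id": invoice_id, "status": "paid", "amount_usd": 1250.00}

if __name__ == "__main__":
    # Runs over the default stdio transport; agents connect to this process
    # as a tool provider alongside any vendor-shipped MCP servers.
    mcp.run()
```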

The Foundation’s Components: MCP, Goose, and Agents.md

When discussing the three initial technologies—MCP, Goose, and Agents.md—Randy is honest about both strengths and gaps.

On the positive side, each represents a piece of a broader ecosystem for building agentic systems. MCP handles communication. Goose introduces a local-first, MCP-integrated agent framework. Agents.md promises a standardized way to describe agent context.

But he also expresses healthy skepticism. Goose’s “local-first” approach raises security and identity risks if taken literally. Enterprises cannot let agents running on laptops make authenticated calls into sensitive internal systems without robust governance and identity controls. Similarly, a single universal Agents.md file seems unlikely; context management is too dependent on use case and domain.

However, Randy believes the foundation is doing the right thing by starting small. OpenStack, Kubernetes, and other major open source ecosystems also began with limited scope before expanding rapidly. The key is community intent, and he sees that clearly here.

The Real Gaps: Governance, Identity, Evaluations, and Data Workflows

Randy also emphasizes the unsolved challenges at the heart of enterprise AI adoption.

Identity: Agents need transitive identity, acting “as the user” rather than as themselves. Without a standard way to safely delegate identity, enterprises cannot deploy agents into critical systems (see the sketch after this list).

Governance: Shadow IT becomes “shadow agents.” Without guardrails, agents may access, modify, or send data in ways that breach compliance rules.

Safety: Data leakage, especially involving PII, could be catastrophic in regulated industries.

Evaluation: Unlike A/B testing in traditional digital systems, agent evaluation is subjective and context-dependent. The industry has no shared method for measuring accuracy, reliability, or hallucination risk.

Data Readiness: Clean, labeled, contextualized data remains the biggest blocker. AI is still garbage-in-garbage-out.
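
On the identity gap, one pattern available today is OAuth 2.0 Token Exchange (RFC 8693): before touching an internal system, the agent trades the end user’s token for a short-lived, narrowly scoped token, so downstream services see the agent acting as the user. The sketch below is an illustration only; the identity-provider URL, client credentials, and scope are hypothetical.

```python
import requests

def exchange_for_delegated_token(user_access_token: str) -> str:
    """Trade the end user's token for a short-lived, narrowly scoped token,
    so downstream systems see the agent acting as the user, not as itself."""
    resp = requests.post(
        "https://idp.example.com/oauth2/token",  # hypothetical identity provider
        data={
            # Parameter names are defined by RFC 8693 (OAuth 2.0 Token Exchange).
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_access_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": "invoices:read",  # hypothetical, deliberately narrow scope
        },
        auth=("agent-client-id", "agent-client-secret"),  # hypothetical agent credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```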

Where Kubernetes helped abstract infrastructure, agentic AI still lacks that cohesive abstraction layer. For Randy, this is where the foundation can have a long-term impact by enabling shared learning across industries. Financial services, healthcare, manufacturing, and other sectors experimenting with agents need a place to compare what works and what fails.

Mirantis’ Role in the Emerging Ecosystem

Randy outlines how Mirantis is approaching this shift. Historically known for open infrastructure—OpenStack, Kubernetes, and container platforms—Mirantis now sees an opportunity to extend upward into agentic infrastructure.

Mirantis is already exploring MCP support across all its products. Customers are asking for integration at the operations layer, not just development. While developers adopt AI tools rapidly, the operations world is still under-served. Agents for root-cause analysis, system diagnostics, automated remediation, and infrastructure correlation are areas where Mirantis sees practical enterprise demand.
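
As a hypothetical illustration of an operations-layer integration (a sketch, not a Mirantis offering), an MCP server could expose cluster diagnostics as a tool an agent calls during root-cause analysis. This assumes the MCP Python SDK and a locally configured kubectl.

```python
import subprocess

from mcp.server.fastmcp import FastMCP

# Hypothetical operations-facing MCP server for incident triage.
mcp = FastMCP("ops-diagnostics")

@mcp.tool()
def recent_events(namespace: str = "default") -> str:
    """Return recent Kubernetes events for a namespace, sorted by timestamp,
    so an agent can correlate them while diagnosing an incident."""
    result = subprocess.run(
        ["kubectl", "get", "events", "-n", namespace, "--sort-by=.lastTimestamp"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()
```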

Mirantis is also intentionally not rushing into a commercial agent platform. Randy’s view is that the market is too immature and changing too quickly for a product to “land.” Instead, Mirantis is following the same playbook it used during the rise of OpenStack and Kubernetes: lead with services, build expertise, create blueprints, and solve real customer problems before formalizing a product strategy.

The company is already delivering MCP-related services, including an internal “agentic maturity model,” and will be releasing more public materials and offerings in the coming months. The strategic goal is clear: help enterprises cross the AI adoption chasm safely and systematically.

Why This Foundation Changes the AI Trajectory

Randy ties the conversation back to the broader ecosystem impact. Agentic AI is currently fragmented, experimental, and often risky. The Agentic AI Foundation represents the first major attempt to provide direction, shared vocabulary, and open governance.

Just as CNCF brought order to cloud native ecosystems, this foundation could become the anchor for agentic AI—defining patterns, best practices, and standardization paths for the next decade.

It will take time. It will involve experimentation, failed patterns, and iterative refinement. But Randy is clear: this is the moment the ecosystem needed. It gives enterprises permission to begin adopting agentic AI with confidence rather than hesitation.
