Why Enterprise AI Needs MCP Standardization—And What Could Go Wrong | Randy Bias, Mirantis

Guest: Randy Bias (LinkedIn)
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

Enterprise AI agent adoption hinges on a critical question: how do you build autonomous systems that work across vendors without creating security nightmares? Randy Bias, VP of Strategy & Technology at Mirantis, argues that the Agentic AI Foundation (AAIF) represents a pivotal moment—but warns that the rush to deploy local agents could recreate the shadow IT problems that plagued early cloud adoption.

The Linux Foundation’s launch of the Agentic AI Foundation (AAIF) with Anthropic, OpenAI, and other key players signals something important: the enterprise AI agent market is consolidating around standards faster than many expected. For Bias, this isn’t just about technology—it’s about de-risking adoption for enterprises that need confidence before they invest.

“Enterprises want to know where to start with confidence,” Bias explains. “They already know there’s risk at the bleeding edge. They know AI carries risk. Connecting agents to specialized data brings risk. So how do you de-risk it? You go where many others are at the table, all learning together on a level playing field.”

From Experiment to Infrastructure

The trajectory of MCP adoption reveals how quickly the agent ecosystem is maturing. With over 10,000 MCP servers and integrations across major platforms like Copilot, Gemini, ChatGPT, and VS Code, the protocol has achieved remarkable traction. But Bias sees a more significant shift ahead.

“What’s really interesting isn’t the 10,000 MCP servers,” he says. “It’s that you’re going to see github.com/mcp becoming standard. Every SaaS provider will have an MCP endpoint. You’ll have a REST API and an agent API. That agent API will be MCP.”

Mirantis is already responding to this trend. The company is working to bake MCP servers into all its products because customers are demanding a standard way to connect agents across their infrastructure. This reflects a broader pattern: vendors who want to participate in the agentic AI economy will need to ship with MCP support built in.
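The dual-surface pattern Bias describes—a REST API alongside an MCP agent API—can be sketched in a few lines. This is an illustrative stand-in using only the standard library, not the official MCP SDK; the tool name `get_ticket` and the ticket data are hypothetical.

```python
import json

# Illustrative sketch: one SaaS backend exposing the same capability two
# ways -- a plain REST-style handler and an MCP-style JSON-RPC tool call.
# Names here (get_ticket, the ticket data) are simplified stand-ins.

TICKETS = {"T-1": {"id": "T-1", "status": "open"}}

def rest_get_ticket(ticket_id: str) -> dict:
    """REST API surface: what GET /tickets/{id} would return."""
    return TICKETS[ticket_id]

def mcp_handle(request_json: str) -> str:
    """Agent API surface: one MCP-style JSON-RPC 'tools/call' request."""
    req = json.loads(request_json)
    if req.get("method") == "tools/call" and req["params"]["name"] == "get_ticket":
        # Both surfaces hit the same business logic underneath.
        ticket = rest_get_ticket(req["params"]["arguments"]["ticket_id"])
        result = {"content": [{"type": "text", "text": json.dumps(ticket)}]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "unknown method"}})
```

The point of the sketch: the agent API is not a second product, just the existing capability wrapped in the JSON-RPC envelope agents expect—which is why shipping an MCP endpoint alongside a REST API is a low-cost move for vendors.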

The future landscape will feature two types of MCP servers, according to Bias. Vendors will provide service endpoints and product integrations, while enterprises will build custom MCP servers tailored to their specific business processes, data sets, and internal tools. This division of labor makes sense—enterprises don’t want to build integrations for every vendor product, and vendors can differentiate on how well their MCP implementations serve customer needs.

The Local Agent Security Problem

But standardization alone doesn’t solve everything. Bias raises a critical concern about the current enthusiasm for local-first agent frameworks like Goose: they create security and compliance blind spots that regulated industries can’t afford.

“If I’m running an MCP server on my laptop and it’s acting on my behalf, accessing internal resources using my identity, I have little control over that data flow,” he explains. “That MCP server calls a REST API or SQL interface inside the enterprise, and it looks like an authentic call from me—even though it’s actually an autonomous agent. Then that agent calls an external LLM, potentially sending data I don’t want it to.”

This is the shadow agent problem. Just as shadow IT proliferated in early cloud days when employees spun up unauthorized services, unrestricted local agents could create ungoverned pathways for sensitive data to leak outside enterprise boundaries. For financial services, healthcare, and other regulated sectors, the reputational and legal consequences could be devastating.

Bias advocates for a different approach: running MCP servers over HTTP in centralized systems where governance, policy, and guardrails can be applied. “We need to put them into centralized systems where we can have controls and compliance, especially in highly regulated industries,” he says.
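The centralized pattern can be sketched as a gateway that sits between agents and MCP servers, applying policy and writing an audit trail before forwarding anything. This is a minimal illustration under assumed names—the agent IDs, tool names, and policy table are all hypothetical, and a real gateway would forward over HTTP rather than echo locally:

```python
# Illustrative sketch of a centralized MCP gateway: every tool call is
# checked against policy and audited before it reaches an MCP server.
# Agent IDs, tool names, and policy rules are hypothetical.

POLICY = {
    "finance-agent": {"allowed_tools": {"read_ledger"}},
    "dev-agent":     {"allowed_tools": {"read_ledger", "run_query"}},
}

AUDIT_LOG: list[dict] = []

def gateway_dispatch(agent_id: str, tool: str, arguments: dict) -> dict:
    """Apply policy and record an audit entry before forwarding a tool call."""
    allowed = POLICY.get(agent_id, {}).get("allowed_tools", set())
    decision = "allow" if tool in allowed else "deny"
    AUDIT_LOG.append({"agent": agent_id, "tool": tool, "decision": decision})
    if decision == "deny":
        return {"error": f"policy denies {tool!r} for {agent_id!r}"}
    # A real deployment would forward over HTTP to the upstream MCP server;
    # the sketch echoes the call to stay self-contained.
    return {"result": {"tool": tool, "arguments": arguments}}
```

Because every call funnels through one choke point, compliance teams get the two things local-first agents deny them: a policy decision before data moves, and a complete record of what moved.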

Why Standards Win—And What Comes Next

The rapid iteration happening under the Agentic AI Foundation gives Bias confidence that MCP will continue to evolve to meet enterprise needs. Anthropic’s track record of listening to feedback—adding authentication when security concerns emerged, launching a registry when discoverability became an issue—demonstrates the kind of responsiveness that wins developer trust.

“The best technology doesn’t always win,” Bias notes. “But those that update very quickly, listen to customer feedback, and meet people where they are in the market—those almost always win.”

As enterprises navigate their agentic AI journeys, the Agentic AI Foundation provides something invaluable: a common ground where they can collaborate, learn, and build with reduced vendor lock-in risk. MCP is positioning itself to become as fundamental to agent architectures as REST APIs are to web services.

But adoption will require balancing the excitement of AI automation with the discipline of security controls. The promise is enormous—truly autonomous agents that can orchestrate across enterprise systems, vendors, and data sources. The risk is equally significant if governance frameworks don’t keep pace with deployment speed.

For decision-makers evaluating agent strategies, Bias’s message is clear: embrace MCP as the emerging standard, but implement it with centralized control and compliance from day one. The agent revolution is happening. How enterprises manage it will determine whether it delivers transformational value or uncontrollable risk.
