
How Mirantis MCP AdaptiveOps Turns Agentic AI From Hype Into Production Reality


Guest: Randy Bias (LinkedIn)
Company: Mirantis
Show Name: The Agentic Enterprise
Topic: AI Infrastructure

The Model Context Protocol (MCP) is now open source under the Linux Foundation, but that’s just the beginning. The real challenge facing enterprises isn’t access to MCP — it’s knowing how to implement agentic AI systems that actually work in production. While most organizations focus on AI-assisted coding, their production operations teams are drowning in complexity. Mirantis has a different approach, and it starts with a fundamental question: why are you building custom AI agents when general-purpose agents can do the job better?

From Announcement to Implementation in Record Time

When the Linux Foundation launched the Agentic AI Foundation last week with Anthropic contributing the Model Context Protocol, enterprises gained access to a critical open source standard. Within days, Mirantis responded with MCP AdaptiveOps — a comprehensive services framework designed to help organizations cross what Randy Bias, VP of Strategy & Technology at Mirantis, calls “the agentic AI chasm.”

“It seems really apparent that we’re moving to an agentic world,” Bias explains. “We’re seeing across our customer base really active conversations about deploying agents and MCP servers for troubleshooting problems in production, particularly around Kubernetes clusters.”

This rapid movement from standard to implementation showcases the velocity that open source enables. But it also reveals a pattern Bias has seen before, during the early days of cloud native adoption: technology moves fast, but enterprises need guidance to avoid costly mistakes.

The Operations Gap Nobody’s Talking About

While AI-assisted coding tools dominate headlines, Bias argues that production operations represents a massively underserved opportunity for agentic AI. “People are focused on AI coding, but there’s a lot more problems with production,” he notes. “What do you do once your code is actually running in production? How do you make sure it continues to run? How do you deal with problems and failures? Agents really should be a key part of that picture.”

Mirantis is releasing proof-of-concept work on their T Zero (t0) blog demonstrating how Claude and MCP servers enable automated triage of production problems. The approach uses what Bias calls “event-driven agent triage” — where production systems, not developers, drive agent behavior. When a Kubernetes cluster experiences issues or a code release triggers alerts, agents automatically investigate using real-time operational data.

This represents a fundamental shift in how enterprises should think about AI agents: not as assistants waiting for human prompts, but as autonomous systems responding to production events.

The Agentic Maturity Model and Service Framework

MCP AdaptiveOps is built around an agentic maturity model that helps enterprises understand where they are in their journey. The framework ranges from basic frontier model usage and simple chatbots to fully autonomous systems operating on sensitive enterprise data.

“Depending on where you are, you figure out which services make sense for you,” Bias explains. The offerings span from two-day assessments to help organizations identify low-hanging fruit, all the way to 16-week platform implementations for enterprises ready to achieve 10x or 100x operational leverage through agents.

The crown jewel is the MCP Server Factory — a three-to-six-week engagement where Mirantis helps build production-ready MCP servers. But Bias challenges the assumption that enterprises need custom agents at all.

Why General-Purpose Agents Beat Custom Development

“Right now we see a lot of people building custom agents and custom MCP servers, and I really question whether you need custom agents,” Bias states. Instead, he advocates for a different architecture: use general-purpose agents like Claude, Codex, or Goose, then give them domain expertise through skills and tools.

Skills encode the knowledge and wisdom of human operators — the processes and workflows that domain experts use. MCP servers provide tools for real-time introspection of running systems. Together, they transform a general-purpose agent into a domain expert without the overhead of maintaining custom agent frameworks.

“You take domain skills and domain tools, and suddenly you turn a general purpose agent into an expert in a given domain,” Bias explains. “That’s more likely how things will evolve rather than building lots of custom agents.”
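The skills-plus-tools composition Bias describes can be sketched as follows. Everything here is illustrative: the skill text, the `list_events` tool, and `build_agent_context` are hypothetical stand-ins for what a host application would hand a general-purpose agent; in practice the tool side would be served over MCP rather than as a local function.

```python
# A "skill": encoded operator knowledge, i.e. the workflow a human
# domain expert would follow.
KUBE_TRIAGE_SKILL = """\
When a pod is failing:
1. List recent events for the namespace.
2. Check container restart counts.
3. Summarize the likely root cause."""

def list_events(namespace: str) -> list[str]:
    # Placeholder; a real MCP server would query the cluster API
    # for live introspection data.
    return [f"{namespace}: Back-off restarting failed container"]

# The "tools": real-time introspection, the role MCP servers play.
TOOLS = {"list_events": list_events}

def build_agent_context(skill: str, tools: dict) -> dict:
    """What the host hands the general-purpose agent: domain knowledge
    plus advertised tool names -- no custom agent framework required."""
    return {
        "system_prompt": skill,
        "tools": sorted(tools),
    }

ctx = build_agent_context(KUBE_TRIAGE_SKILL, TOOLS)
```

The agent itself stays generic; only `ctx` changes per domain, which is why improvements to the underlying agent flow through automatically.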

This approach also future-proofs implementations. As general-purpose agents improve, enterprises automatically benefit from innovations in reasoning, loop prevention, and decision-making — capabilities that custom agents must recreate from scratch.

AI Governance Without Paralysis

When it comes to AI risk and compliance, Bias takes a pragmatic stance: focus on existing regulations before worrying about hypothetical AI-specific rules. “If you’re a healthcare company, you have to be concerned about HIPAA. If you’re in financial services, you need to be concerned about all the regulations that apply there,” he notes.

The fundamental challenge is ensuring that sensitive data — PII, HIPAA-protected information, financial records — never leaves enterprise boundaries. For organizations handling highly sensitive data, this means deploying on-premises inference engines and implementing policy frameworks that ensure agents only use approved models.

“We have to start thinking about deploying inference engines and LLMs on site, because we don’t want that to leave our four walls,” Bias explains. Solutions like Kordent AI enable enterprises to build AI sovereign factories where policy controls which agents can access sensitive data and which inference engines they’re permitted to use.
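One way to picture the policy control described here is a simple allowlist keyed on data sensitivity. This is a hedged sketch of the idea, not a real product API; the endpoint URLs and classification names are invented for illustration.

```python
# Approved inference endpoints per data-sensitivity class.
# Sensitive data (PII, HIPAA, financial records) may only reach
# on-premises inference engines inside the enterprise boundary.
APPROVED_ENDPOINTS = {
    "sensitive": {"https://inference.internal/v1"},
    "general": {
        "https://inference.internal/v1",
        "https://api.example-cloud.com/v1",  # cloud allowed for general data
    },
}

def endpoint_allowed(data_class: str, endpoint: str) -> bool:
    """Return True only if this endpoint is approved for the
    sensitivity class of the data the agent is processing."""
    return endpoint in APPROVED_ENDPOINTS.get(data_class, set())

ok = endpoint_allowed("sensitive", "https://inference.internal/v1")
blocked = endpoint_allowed("sensitive", "https://api.example-cloud.com/v1")
```

A real policy framework would enforce this at the network or gateway layer rather than in agent code, but the decision logic is the same: data classification determines which inference engines an agent may use.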

What MCP Joining the Linux Foundation Changes

With MCP now part of the Linux Foundation’s Agentic AI Foundation, Bias believes the protocol will become the de facto standard. “MCP going into the Linux Foundation is going to suck the oxygen out of the room for a lot of other aspirational agents and MCP protocols,” he predicts.

This standardization matters because the best technical solution doesn’t always win — the one with mass adoption does. “We’ve seen time and time again that the best technological solution isn’t the one that wins, it’s the one that gets mass adoption,” Bias notes, drawing parallels to OpenStack and Kubernetes in their early days.

For Mirantis customers, this means betting on MCP is betting on the winning standard. The ecosystem around MCP will grow, innovation will accelerate, and enterprises that move now will have first-mover advantages in production agentic AI.

The Path Forward

As enterprises rush to embrace agentic AI, Mirantis offers a clear message: start with where you are, focus on operational use cases, avoid custom agent development, and build flexible architectures around open standards like MCP.

“The wave to ride is here,” Bias concludes. Organizations interested in MCP AdaptiveOps can start with assessments on the Mirantis website, and technical audiences should follow the T Zero blog where Mirantis is publishing detailed explorations of AI-native patterns, event-driven operations, and production agent deployment.

The agentic AI chasm is real, but the path across it is becoming clearer — and it runs through open source standards, general-purpose agents, and a relentless focus on production operations.
