
Future-Proofing AI: Why MCP Standards Beat Custom Agents in Rapidly Evolving Tech

Guest: Randy Bias
Company: Mirantis
Show Name: The Agentic Enterprise
Topic: AI Infrastructure

When AI technology evolves every few months, how do you build systems that won’t require complete rewrites? Randy Bias, VP of Strategy & Technology at Mirantis, tackles this existential challenge facing enterprise AI teams. His answer challenges conventional wisdom: stop building custom agents that lock you into soon-to-be-obsolete frameworks, and start building composable systems around winning standards.


The Future-Proofing Problem

Model Context Protocol (MCP) is barely out of its infancy, yet enterprises are betting infrastructure decisions on it. The question isn’t whether MCP will evolve—it’s how quickly and how dramatically. For organizations investing in AI infrastructure, this rapid evolution creates a dilemma: move fast and risk obsolescence, or wait and fall behind competitors.

Bias’s approach resolves this tension through three interconnected strategies that prioritize adaptability over custom development.

Strategy One: Bet on Winning Standards

The first defense against rapid change is choosing the right foundation. “You go with the standards that seem to be winning—MCP, number one,” Bias states.

This isn’t about picking the newest technology—it’s about identifying which standards have momentum, ecosystem support, and architectural staying power. MCP emerged from Anthropic with clear specifications and immediate adoption across major AI platforms. The protocol’s design for agent-to-tool communication addresses a fundamental need that won’t disappear as the technology matures.

Standards provide a hedge against vendor lock-in and framework obsolescence. When the underlying implementation changes, systems built on standards can adapt without wholesale rewrites.
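
To make the standards bet concrete, here is a minimal sketch of an MCP tool server, assuming the official MCP Python SDK's FastMCP interface; the server name and the inventory tool are hypothetical illustrations:

```python
# Minimal MCP tool server sketch (assumes the official `mcp` Python SDK).
# The server name and the example tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-tools")

@mcp.tool()
def check_stock(sku: str) -> str:
    """Report on-hand quantity for a SKU (stubbed for illustration)."""
    # A real deployment would query an inventory system here.
    return f"SKU {sku}: 42 units on hand"

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-capable agent can connect.
    mcp.run()
```

Because the tool speaks the protocol rather than any one framework's internal API, whatever agent wins next year can still discover and call it.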

Strategy Two: Ride the Innovation Curve

Bias’s second strategy directly challenges the custom agent development model: use general-purpose agents with domain skills rather than building proprietary frameworks.

“Rather than trying to continue to maintain a lot of custom agents, you instead can run the innovation curve of the general purpose agents,” he explains.

This approach leverages a critical insight: companies like Anthropic, OpenAI, and others are solving hard problems in agent design—preventing infinite loops, improving reasoning capabilities, optimizing context management. These are problems that custom agent builders would need to solve independently, often with far fewer resources.

“Anyone who has taken these general purpose agents for a ride will understand that there’s fundamentally a difference between them,” Bias notes. “Why would you recreate all of that?”

By building on general-purpose agents, organizations benefit automatically from ongoing improvements. When Claude or Codex releases a new version with better reasoning or more efficient token usage, systems built on these platforms inherit those benefits without code changes.

The alternative—maintaining custom agents—creates technical debt that compounds over time. Each advancement in general-purpose agents widens the gap between what custom implementations can do and what’s possible with production-grade platforms.
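
The flip side of this strategy is that the domain skill becomes the stable asset while the agent underneath is swappable. As a minimal, hedged sketch of the client side, assuming the official MCP Python SDK and the hypothetical inventory server from the earlier example (a general-purpose agent's MCP client performs this same handshake):

```python
# Sketch: any MCP client can attach to the same domain-skill server.
# Assumes the official `mcp` Python SDK; the server script name is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["inventory_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover the domain skills this server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

if __name__ == "__main__":
    asyncio.run(main())
```

Swap one general-purpose agent for another, or this script for a production client, and the inventory server never changes.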

Strategy Three: Build Composable, Pluggable Systems

The third pillar of Bias’s future-proofing strategy focuses on system architecture: proper abstraction layers that enable component swapping without policy rewrites.

This is where Mirantis’ MCP AdaptiveOps framework becomes relevant. “You need to think about having policy that can be applied regardless of the components,” Bias explains, “so that if you decide at some future date you want to remove a certain kind of MCP security gateway and replace it with a different one, you can do so without having to change your policy.”

The analogy extends to inference engines and other infrastructure components. A well-architected system treats these as pluggable modules rather than hardcoded dependencies.

Composability requires discipline. It means defining clean interfaces between components. It means separating policy from implementation. It means resisting the temptation to take shortcuts that create tight coupling.

But the payoff is substantial. When a better inference engine emerges, or when a security gateway vendor releases a breakthrough feature, composable systems can incorporate these improvements without disruption.
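
A minimal sketch of that separation, using hypothetical gateway and policy names; the only point is that the policy function never references a specific vendor:

```python
# Sketch: policy written against an interface, not a vendor (names hypothetical).
from typing import Protocol

class SecurityGateway(Protocol):
    """Anything that can authorize an agent's tool call."""
    def authorize(self, agent_id: str, tool_name: str) -> bool: ...

# Policy depends only on the SecurityGateway interface.
BLOCKED_TOOLS = {"delete_database"}

def enforce_policy(gateway: SecurityGateway, agent_id: str, tool_name: str) -> bool:
    if tool_name in BLOCKED_TOOLS:
        return False
    return gateway.authorize(agent_id, tool_name)

# Two interchangeable implementations; swapping them leaves the policy untouched.
class VendorAGateway:
    def authorize(self, agent_id: str, tool_name: str) -> bool:
        return True  # stub: call Vendor A's API here

class VendorBGateway:
    def authorize(self, agent_id: str, tool_name: str) -> bool:
        return True  # stub: call Vendor B's API here
```

Replacing VendorAGateway with VendorBGateway is then a one-line change at the composition root; enforce_policy is untouched.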

The Abstraction Layer Imperative

All three strategies converge on a single architectural principle: the right layers of abstraction at the right points. This isn’t abstract theory—it’s practical risk management.

“The pluggability and composability of the system that you build is really about having the right layers of abstractions at all the right points, so that as things mature, as technology evolves, you can swap out the things that make sense,” Bias summarizes.

These abstraction layers serve as shock absorbers for technological change. They allow internal components to evolve independently without cascading changes through the entire system.
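
One way to picture such a shock absorber, using hypothetical engine names and a made-up generate() signature: the rest of the system depends on a narrow interface, and a registry maps configuration to implementations:

```python
# Sketch: an inference-engine seam, so backends can be swapped without
# cascading changes (engine classes and signatures are hypothetical stubs).
from typing import Protocol

class InferenceEngine(Protocol):
    def generate(self, prompt: str) -> str: ...

class VllmEngine:
    def generate(self, prompt: str) -> str:
        return "..."  # stub: call a vLLM endpoint here

class TritonEngine:
    def generate(self, prompt: str) -> str:
        return "..."  # stub: call a Triton endpoint here

ENGINES: dict[str, type] = {"vllm": VllmEngine, "triton": TritonEngine}

def make_engine(name: str) -> InferenceEngine:
    # The rest of the system only ever sees InferenceEngine.
    return ENGINES[name]()
```

When a better engine ships, it registers under a new name and a config value changes; callers of make_engine do not.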

De-Risking Enterprise AI Investment

For enterprise decision-makers, Bias’s approach offers a framework for AI investment that balances innovation with prudence. Custom agents might seem like competitive differentiators, but they’re more likely to become maintenance burdens.

The future-proof path prioritizes standards over proprietary frameworks, leverages ongoing innovation from platform providers, and builds systems that can adapt as technology matures. It’s not the path that generates the most GitHub stars or conference presentations. But it’s the path that keeps AI systems running—and evolving—five years from now.
