Guest: Randy Bias
Company: Mirantis
Show Name: An Eye on AI
Topics: AI Governance, Agentic AI
AI may be the hottest technology of the decade, but Randy Bias says enterprises are stuck in déjà vu. “It feels a bit like 2010 to 2012 for cloud native,” says the Mirantis VP of Strategy & Technology. “Everyone’s excited, experimenting—and ignoring the hard parts like security and compliance.”
The Cloud-Native Parallel
Back in the early cloud days, developers rushed to deploy workloads to AWS and OpenStack without governance. Bias sees the same behavior now, as organizations race to build and deploy AI agents. “They’re getting leverage from frontier models,” he says, “mostly around white-collar tasks like coding or marketing.” But few have figured out how to safely connect those agents to their most valuable and regulated data—financial systems, customer PII, healthcare records.
The Model Context Protocol (MCP), introduced by Anthropic, has quickly become the “glue” of the agentic AI ecosystem. It’s what allows agents to call APIs, run SQL queries, and connect across tools. But Bias points out a serious gap: “When MCP launched, it had no authentication or security at all. You’d connect it directly to apps on your desktop—and it would just start making calls. Where’s that data going?”
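To make the integration pattern concrete, here is a minimal sketch of an MCP tool server, assuming the official Python MCP SDK (package "mcp") and its FastMCP interface; the query_orders tool and its data are invented for illustration. In the original stdio model Bias describes, whichever local client launches the process can call the tool, with no credentials exchanged.

```python
# Minimal MCP tool server sketch, assuming the Python MCP SDK's
# FastMCP interface. The query_orders tool is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")

@mcp.tool()
def query_orders(customer_id: str) -> str:
    """Return recent orders for a customer (stubbed sample data)."""
    # A real implementation would query a production database, which is
    # exactly the regulated data Bias is worried about exposing.
    return f"customer {customer_id}: order A-1001, total $42.50"

if __name__ == "__main__":
    # Default stdio transport: the local client that launches this
    # process gets full access to the tool, no authentication in between.
    mcp.run()
```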
While authentication was later added to the protocol, Bias argues it’s still not enough for enterprise adoption. Like early cloud-native tooling, MCP’s architecture assumes trust where it shouldn’t. Security and compliance, he says, are “bolt-ons” rather than core design principles.
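By contrast, here is a hypothetical sketch of what authorization as a core design principle might look like: a fail-closed guard that checks a token and a per-agent tool allowlist before any tool code runs. The token store and allowlist are invented stand-ins for a real identity provider and policy engine.

```python
# Hypothetical fail-closed guard in front of every tool invocation.
import hmac

VALID_TOKENS = {"s3cr3t-agent-token"}   # stand-in for a real identity provider
ALLOWED_TOOLS = {"query_orders"}        # per-agent tool allowlist (invented)

def authorize(token: str, tool_name: str) -> None:
    # Constant-time comparison against each known token.
    if not any(hmac.compare_digest(token, t) for t in VALID_TOKENS):
        raise PermissionError("invalid or missing token")
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not permitted")

def call_tool(token: str, tool_name: str, **kwargs):
    authorize(token, tool_name)  # fail closed before any I/O happens
    return {"tool": tool_name, "args": kwargs, "result": "stubbed"}

call_tool("s3cr3t-agent-token", "query_orders", customer_id="c-42")  # allowed
# call_tool("wrong-token", "query_orders")  -> raises PermissionError
```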
Where Enterprises Are Stuck
The opportunity is huge: AI agents that can process internal data securely could unlock competitive advantages across industries. But Bias cautions that the transition from experimentation to production will take more than enthusiasm. “People are getting immediate value out of AI,” he says, “but how to run these things in production is still an afterthought.”
For most enterprises, that means rethinking deployment architectures: internal large language models (LLMs), private inference environments, and governance frameworks that define exactly what agents can access and how they’re monitored.
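A governance framework along those lines can start as something as simple as a declarative access policy with audit logging. The sketch below is hypothetical; the agent names and data sources are invented.

```python
# Hypothetical governance layer: a declarative policy defines which data
# sources each agent may touch, and every decision is audit-logged.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent-audit")

# Invented example policy; a real one would live in config management.
POLICY = {
    "marketing-agent": {"crm_notes", "campaign_stats"},
    "finance-agent": {"ledger", "invoices"},
}

def check_access(agent: str, source: str) -> bool:
    allowed = source in POLICY.get(agent, set())
    audit.info("agent=%s source=%s allowed=%s", agent, source, allowed)
    return allowed

assert check_access("marketing-agent", "crm_notes") is True
assert check_access("marketing-agent", "ledger") is False  # denied and logged
```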
The Path Forward
Bias believes enterprises that treat AI agents like ungoverned prototypes will quickly hit the same walls cloud pioneers did a decade ago. “If you want to do something with healthcare records,” he says, “you’re not going to send that to OpenAI. You’ll deploy an internal LLM.”
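The deployment pattern he is pointing at is already well established: point a standard client at an OpenAI-compatible inference server running inside the network (vLLM and Ollama both expose one), so sensitive records never leave the perimeter. A minimal sketch, with the endpoint, model name, and prompt all hypothetical:

```python
# Sketch of the "internal LLM" pattern: the application talks to an
# OpenAI-compatible endpoint hosted inside the network, so regulated
# data never crosses the perimeter. URL, model, and prompt are invented.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # in-house inference server
    api_key="not-needed-internally",                 # placeholder; no external key
)

resp = client.chat.completions.create(
    model="internal-clinical-llm",
    messages=[{"role": "user", "content": "Summarize this patient note: ..."}],
)
print(resp.choices[0].message.content)
```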
In other words, the future of enterprise AI won’t just be smarter—it will have to be safer. Bias sees this as the inflection point: move from “useful toys” to production-grade, compliant AI infrastructure that enterprises can actually trust.