Kubernetes has a complexity problem. Despite its power and flexibility, managing container orchestration at scale remains challenging even for experienced developers. The solution seems obvious: use AI to automate routine tasks and reduce operational friction. But there’s a catch that many AI-powered tools ignore: developers don’t want to surrender control.
This tension between automation and autonomy sits at the heart of modern DevOps challenges. Kyle Wheeler, General Manager for Lens at Mirantis, understands this paradox intimately. His team has built Lens Prism specifically to navigate these competing demands, creating an AI assistant that enhances developer capabilities without overstepping boundaries.
Guardrails as Core Architecture
Mirantis’ approach starts with a fundamental principle: respect existing access controls. “Guardrails have always been really important to us at Lens,” Wheeler explains. The platform employs role-based access controls that mirror whatever permissions developers already possess through their kubeconfig files and tokens.
This isn’t just a security feature; it’s an architectural decision that shapes how the AI operates. Lens Prism only sees what individual developers can access, creating a natural boundary that prevents the AI from exceeding human permissions. The system provides read-only insights into cluster operations, understanding what’s happening without making unauthorized changes.
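The boundary described here maps cleanly onto Kubernetes’ native role-based access control. A read-only scope of the kind an assistant would inherit from a developer’s kubeconfig looks roughly like the following standard Role (a generic Kubernetes sketch with hypothetical names, not Lens Prism’s actual configuration):

```yaml
# A standard Kubernetes Role granting read-only access to common
# workload resources in one namespace. A tool authenticating with a
# kubeconfig bound to a role like this can observe but not mutate.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team        # hypothetical namespace
  name: read-only-observer   # hypothetical role name
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "pods/log", "deployments", "events"]
  verbs: ["get", "list", "watch"]   # observation only; no create/update/delete
```

Because the assistant authenticates with the developer’s own credentials, any attempt to exceed this scope is rejected by the Kubernetes API server itself rather than by application-level logic. A developer can confirm the boundary directly with `kubectl auth can-i delete pods`, which returns `no` under a role like this.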
The Read-Only Philosophy
Current Lens Prism capabilities deliberately limit AI actions to observation and recommendation. When the system identifies issues or optimization opportunities, it surfaces relevant commands but never executes them automatically. Developers must manually copy and paste suggested commands into their terminal sessions.
This might seem inefficient compared to fully automated solutions, but Wheeler argues that it strikes the right balance. “It’s, again, at the user’s desire and control,” he notes. The copy-paste requirement ensures that developers review every action before execution, preserving the human decision-making layer that many organizations require.
Beyond Current Limitations
Despite current restrictions, Wheeler acknowledges that Lens Prism could theoretically do much more. Internal teams at Mirantis have experimented with removing guardrails in demo environments, allowing the AI to execute commands and implement code changes directly. These experiments reveal both the potential and the challenges of expanding AI capabilities.
“We can already do some of this today, but because of our understanding—even our internal cloud developers who have been using Lens Prism for some time—there’s resistance to letting AI run wild,” Wheeler observes. This resistance isn’t a technical limitation but an organizational reality.
The Future Vision
Wheeler envisions a more automated future where Lens Prism could handle routine maintenance tasks autonomously. Imagine requesting a daily report that not only identifies cluster issues but also fixes them automatically. The technology exists; the challenge lies in balancing automation with control requirements.
“That might look like, ‘Hey, set up a daily report and just tell me what’s going on with my clusters—and also go ahead and fix the things that are wrong with them,’” Wheeler explains. But implementing this vision requires careful consideration of company policies and individual user preferences.
Industry Context and Implications
The Lens Prism approach reflects broader industry trends around AI governance and responsible automation. While some vendors push for maximum automation, Mirantis recognizes that enterprise adoption requires trust and control mechanisms. Heavily regulated industries, in particular, need AI tools that enhance human capabilities without creating compliance risks.
This philosophy extends beyond Kubernetes to cloud cost optimization, policy enforcement, and observability challenges. As Wheeler notes, geopolitical changes and regulatory requirements are reshaping how organizations approach AI adoption in infrastructure management.
The Control Plane Concept
Wheeler introduces an interesting framework: treating AI permissions like a control plane that organizations can configure according to their risk tolerance. Individual developers might choose read-only AI assistance, while others could enable more automated capabilities. The key is making this configurable rather than imposing one-size-fits-all solutions.
“This is the box that AI can live in within my environment, and that can be from an individual developer’s standpoint,” Wheeler explains. This granular control acknowledges that AI adoption isn’t binary; it’s a spectrum of trust and automation that varies by context and user preference.
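Made concrete, such a control plane might take the form of a declarative policy that scopes what an assistant may do per organization, team, or individual. The schema below is purely hypothetical; Mirantis has not published such a format, and it is offered only to illustrate the spectrum of trust Wheeler describes:

```yaml
# Hypothetical AI-permission policy; all field names are illustrative.
aiAssistantPolicy:
  default: read-only            # org-wide floor: observe and suggest only
  overrides:
  - subject: team:platform
    mode: suggest-and-apply     # may execute a narrow class of commands
    allowedVerbs: ["scale", "rollout-restart"]
    requiresApproval: true      # a human still confirms before execution
  - subject: user:alice@example.com
    mode: read-only             # an individual can opt for stricter limits
```

The design point is that the automation level is data, not code: tightening or loosening the “box” is a configuration change an organization can audit, rather than a different product.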
Looking Ahead
The Kubernetes ecosystem continues evolving rapidly, with AI capabilities becoming standard rather than experimental. However, Wheeler’s approach suggests that successful AI tools won’t be those that replace human judgment but those that augment it intelligently. Lens Prism represents a pragmatic middle path between manual complexity and automated uncertainty.
As organizations grapple with AI governance frameworks and responsible automation policies, tools like Lens Prism offer a template for balanced implementation. The challenge isn’t building more powerful AI; it’s building AI that organizations actually want to adopt and trust with their critical infrastructure.
The conversation around Kubernetes AI is just beginning, but Wheeler’s insights suggest that the winners will be those who prioritize developer control alongside operational efficiency. In a world where AI capabilities advance faster than organizational trust, guardrails aren’t limitations; they’re competitive advantages.