AI Infrastructure

How Mirantis’ k0rdent Is Rewriting the Future of AI Infrastructure: Dominic Wilde on Hybrid Cloud, GPUs, and “Metal to Models”

Guest: Dominic Wilde
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

AI is forcing a once-in-a-generation rethink of enterprise infrastructure. Organizations aren’t just scaling up GPU clusters — they’re rebuilding how their clouds, networks, and virtualization stacks work. At KubeCon + CloudNativeCon, Dominic Wilde, SVP & GM of the Core Business at Mirantis, joined us to unpack what this shift really looks like and why companies need a more flexible, Kubernetes-native approach to unify their environments.

For years, the industry has talked about hybrid cloud as an aspiration. Today it’s a reality — and a messy one. Enterprises run virtual machines, containers, multiple Kubernetes clusters, and increasingly large AI workloads across private, public, and niche NeoCloud environments. The result is complexity. Not the good kind. The kind that slows down transformations and blocks AI teams from moving fast.

Dominic Wilde explains that this complexity is exactly what drove Mirantis to build k0rdent, a Kubernetes-native multi-cloud management plane designed to solve the sprawl problem. “k0rdent was built to enable customers to modernize without taking on the burden of a rip-and-replace,” he shares. It’s not a product that forces a new stack. Instead, it’s built to accommodate what companies already have — and bring order to it.

The platform acts as an “Uber control plane,” allowing organizations to interconnect Kubernetes clusters across any environment. Whether it’s private cloud, different public clouds, or mixed infrastructure, k0rdent gives teams a declarative, template-driven model to deploy services at scale. That architectural foundation is now proving essential as enterprises rethink their stack to accommodate AI.
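
To make the “declarative, template-driven” idea concrete, here is a minimal Python sketch of the pattern: a service is declared once as a template, rendered per cluster with local overrides, and a control loop converges each cluster toward that desired state. The names and data shapes below are illustrative assumptions, not k0rdent’s actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceTemplate:
    name: str
    chart: str
    values: dict

def render(template: ServiceTemplate, cluster: str, overrides: dict) -> dict:
    """Desired state for one cluster: the shared template plus local overrides."""
    return {
        "cluster": cluster,
        "release": template.name,
        "chart": template.chart,
        "values": {**template.values, **overrides},
    }

def reconcile(desired: list[dict], actual: set[tuple[str, str]]) -> list[tuple[str, dict]]:
    """Decide, per cluster, whether to install or update to reach desired state."""
    return [
        ("update" if (d["cluster"], d["release"]) in actual else "install", d)
        for d in desired
    ]

# Declare the service once...
template = ServiceTemplate("gpu-operator", "nvidia/gpu-operator", {"mig": "off"})

# ...then roll it out everywhere, with per-cluster overrides.
clusters = {"aws-prod": {}, "onprem-dc1": {"mig": "all-balanced"}}
desired = [render(template, c, o) for c, o in clusters.items()]
actual = {("aws-prod", "gpu-operator")}  # already running on one cluster

for verb, d in reconcile(desired, actual):
    print(verb, d["cluster"], d["release"], d["values"])
```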

The Rise of k0rdent Virtualization

One of the biggest announcements Mirantis brought to KubeCon was k0rdent Virtualization — a major step forward in unifying VMs and containers under one lifecycle and management layer. Wilde describes it as a way to help customers transition at their own pace. “We can take customers on a journey,” he says. “Not a rip and replace.”

The challenge is real: enterprises still rely heavily on VM-based workloads, even as they shift to Kubernetes and cloud-native patterns. AI only widens that gap. GPU-based workloads often run in specialized environments, while existing systems remain siloed. By bringing virtualization capabilities directly into k0rdent, Mirantis gives organizations a single operational surface for both legacy and modern applications.
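
The article doesn’t spell out k0rdent Virtualization’s API, but the “single operational surface” idea can be sketched with the standard Kubernetes Python client: containers and VMs listed through one API server. This assumes a KubeVirt-style VirtualMachine CRD is installed and the `kubernetes` package is available; it is a conceptual illustration, not k0rdent code.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

core = client.CoreV1Api()
custom = client.CustomObjectsApi()

# Containers: ordinary pods from the core API.
pods = core.list_pod_for_all_namespaces()

# VMs: custom resources, assuming a KubeVirt-compatible CRD is present.
vms = custom.list_cluster_custom_object(
    group="kubevirt.io", version="v1", plural="virtualmachines"
)

# One inventory loop covering both the legacy and the cloud-native estate.
for p in pods.items:
    print(f"pod {p.metadata.namespace}/{p.metadata.name}")
for vm in vms.get("items", []):
    print(f"vm  {vm['metadata']['namespace']}/{vm['metadata']['name']}")
```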

And beyond convenience, this unification unlocks efficiency. It breaks down operational silos, aligns teams on a shared model, and improves visibility across environments.

AI Workloads Are Changing the Infrastructure Playbook

Wilde emphasizes that AI brings “new and more complex requirements” to infrastructure — especially around performance. Ensuring that GPU, CPU, and memory resources align correctly with specific AI jobs isn’t just a tuning exercise; it’s now a core architectural priority.

Networking complexity is another emerging challenge. AI systems rely on new data movement patterns that strain traditional enterprise networking. Memory management has become equally crucial, particularly as teams optimize for local memory usage, huge pages, and advanced memory-mapping techniques.
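
For a concrete taste of the memory tuning involved, the sketch below allocates a buffer backed by huge pages on Linux. It assumes Python 3.10 or newer (for mmap.MAP_HUGETLB) and 2 MiB huge pages already reserved on the host; it illustrates the underlying OS mechanism only and is not tied to k0rdent.

```python
import mmap

# Common 2 MiB huge page size on x86-64 Linux.
HUGE_PAGE = 2 * 1024 * 1024

# Huge pages must be reserved ahead of time, e.g.:
#   echo 128 | sudo tee /proc/sys/vm/nr_hugepages
buf = mmap.mmap(
    -1,          # anonymous mapping: plain memory, no backing file
    HUGE_PAGE,   # length must be a multiple of the huge page size
    flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS | mmap.MAP_HUGETLB,
)
buf[:8] = b"weights!"  # touch the mapping so the huge page is actually faulted in
buf.close()
```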

Mirantis is responding by integrating these performance-sensitive capabilities into k0rdent Virtualization and an evolving k0rdent AI variant. The goal is simple: give enterprises the tools to deploy GPUs, slice them efficiently, and align resources with workload-specific performance needs.
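
To illustrate the kind of placement problem GPU slicing creates, here is a toy first-fit allocator that packs fractional GPU requests onto whole devices. The job and device names are made up, and real slicing (for example, MIG partitions) has fixed profiles rather than arbitrary fractions; this is a sketch of the scheduling concern, not k0rdent’s implementation.

```python
def place(jobs: dict[str, float], gpus: dict[str, float]) -> dict[str, str]:
    """Assign each job (the GPU fraction it needs) to the first device with room."""
    free = dict(gpus)  # remaining capacity per GPU, 1.0 == a whole device
    placement = {}
    for job, need in sorted(jobs.items(), key=lambda kv: -kv[1]):  # big jobs first
        for gpu, cap in free.items():
            if cap >= need:
                free[gpu] = cap - need
                placement[job] = gpu
                break
        else:
            raise RuntimeError(f"no GPU has {need:.2f} free for {job}")
    return placement

jobs = {"training": 1.0, "inference-a": 0.25, "inference-b": 0.25, "embed": 0.5}
gpus = {"gpu-0": 1.0, "gpu-1": 1.0}
print(place(jobs, gpus))  # training fills gpu-0; the smaller jobs share gpu-1
```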

A big part of this work comes from Mirantis’ engagement with rapidly scaling NeoCloud providers — cloud platforms purpose-built for AI. These environments push hardware, networking, and software to extremes. As Wilde puts it, “New things are uncovered as you go to scale,” and Mirantis is helping solve these challenges in real time.

Why Private Cloud Is Becoming Essential Again

One of the most interesting shifts Wilde highlights is the “renewed interest in private cloud.” Given Mirantis’ deep OpenStack heritage, he explains that enterprises are rediscovering the model not only for cost and scale but also for sovereignty, repatriation trends, and the need for tighter control over GPU resources.

AI is accelerating this shift. Public cloud GPUs are expensive and scarce. Private environments offer predictability — but only if organizations can manage them without introducing silos. This is where Mirantis’ depth of experience with OpenStack, Kubernetes, and hybrid cloud operations becomes a strategic advantage.

Their work spans everything from bare metal up to model serving. Wilde describes it as supporting customers “from metal to models,” ensuring that teams can plan, orchestrate, and scale AI infrastructure with a clear path forward, not a patchwork of tools and providers.

Helping Companies Ask the Right AI Questions

A major issue in the enterprise remains education. Many organizations simply don’t know the right questions to ask when preparing for AI. Mirantis created an AI maturity model, now available publicly, to help customers evaluate their readiness and identify gaps.

The model covers everything from data layer readiness to GPU strategy, hybrid cloud integration, and operational patterns for inference and training. According to Wilde, this has become an increasingly popular resource because teams are under pressure to deliver AI outcomes without always understanding the foundational work required.
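
As a purely illustrative sketch of how such a self-assessment might be tallied, the snippet below scores readiness dimensions on an assumed 1 to 5 scale. The dimension names follow the article’s list, but the scoring scheme is an assumption, not Mirantis’ actual maturity model.

```python
# Hypothetical self-assessment scores (1 = ad hoc, 5 = fully operationalized).
scores = {
    "data layer readiness": 3,
    "GPU strategy": 2,
    "hybrid cloud integration": 4,
    "inference operations": 2,
    "training operations": 1,
}

overall = sum(scores.values()) / len(scores)
gaps = [dim for dim, s in scores.items() if s <= 2]  # weakest areas first to fix

print(f"overall maturity: {overall:.1f}/5")
print("biggest gaps:", ", ".join(gaps))
```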

Partnerships Are Driving the AI Ecosystem

Mirantis has deepened its work with Nvidia, especially around the AI Factory for Government and ongoing support for BlueField DPUs. Wilde notes that the collaboration has grown naturally as Mirantis helps NeoCloud builders and enterprises adopt GPU-accelerated architectures.

But he emphasizes that Mirantis’ ecosystem strategy mirrors Nvidia’s openness. Mirantis works with many partners across the stack, aligning with the belief that no single vendor can own the entire AI pipeline. In a rapidly evolving ecosystem, flexibility and a composable architecture matter far more than allegiance to any single vendor.

What k0rdent Means for the Future

Looking ahead, Wilde sees k0rdent as more than a tool — it’s a way for organizations to break out of the cycle of silo-driven modernization. Too often, emerging technologies create their own isolated stacks. Companies stand up new clusters, new tools, new operational models. AI is at risk of becoming another silo.

k0rdent provides an alternative path: integrating AI workloads into the broader infrastructure environment while also unifying legacy and cloud-native systems. “AI is a marathon,” Wilde reminds us. It won’t be solved with a single platform purchase or a year-end sprint. The underlying infrastructure needs to be flexible enough to evolve continuously.

By focusing on choice, modularity, and open source principles, Mirantis is positioning k0rdent as a foundational layer for the next decade of cloud and AI operations. As enterprises blend containers, VMs, GPUs, distributed clusters, and increasingly complex AI pipelines, a unified control plane isn’t just helpful — it becomes essential.
