How Mirantis and Nvidia Are Shaping an Open AI Infrastructure Ecosystem for the Enterprise

Guest: Dominic Wilde
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

AI has entered the stage where no single vendor can solve the entire problem. In this clip, Dominic Wilde, SVP & GM of the Core Business at Mirantis, explains how Mirantis’ growing collaboration with Nvidia — along with a broader partner ecosystem — is helping enterprises adopt AI without creating new silos or sacrificing flexibility. As Wilde puts it, AI is a marathon, and organizations need infrastructure that evolves with them rather than locking them into a rigid stack.

AI infrastructure is more complex than it has ever been. It spans GPUs, DPUs, virtualization, Kubernetes, hybrid cloud, data pipelines, and increasingly, model serving architectures. While Nvidia remains the clear market leader — especially in accelerated computing and AI — the ecosystem needed to operationalize AI extends far beyond GPUs alone. Wilde highlights that Mirantis has been deepening its partnerships to support this broader reality.

A central example is Mirantis’ recent work with Nvidia showcased at Nvidia GTC. The two companies announced joint efforts around the AI Factory for government, a reference architecture and platform aimed at helping public sector organizations deploy secure, scalable, GPU-powered AI environments. Mirantis is contributing its expertise in deployment, lifecycle management, and production-grade Kubernetes operations — areas where many enterprises still struggle.

Mirantis is also working closely with Nvidia on supporting both current and next-generation BlueField DPUs. These DPUs are becoming essential in AI-driven data centers, providing hardware acceleration for networking, security, and storage workloads. Supporting them requires deep infrastructure knowledge, something Mirantis brings through its long history in OpenStack, Kubernetes, and large-scale private cloud operations.

But what stands out in Wilde’s comments is not just the partnership with Nvidia, but Mirantis’ philosophical alignment with how Nvidia approaches the ecosystem. Nvidia is deeply open in how it collaborates with partners — a mindset that mirrors Mirantis’ commitment to open source, interoperability, and composability. Wilde notes that Mirantis intends to stay equally open in its own ecosystem strategy, working with a wide range of partners rather than pushing customers toward a closed, single-vendor stack.

This openness becomes even more important when considering how enterprises are evolving their infrastructure for AI. Wilde points out a pattern that has repeated throughout the history of enterprise IT: new technologies often arrive as silos, completely disconnected from legacy workloads and operational models. Organizations stand up new clusters, new tooling, and parallel environments that quickly become difficult to manage.

Mirantis’ goal with k0rdent is to stop that from happening again with AI.

k0rdent provides a unified lifecycle and orchestration layer that brings together virtual machines, containers, and AI workloads into a single operational model. Instead of forcing customers into a fully opinionated “rip-and-replace” architecture — which some vendors promote as the fastest path to AI modernization — k0rdent embraces the diversity of real enterprise environments. It supports different Kubernetes distributions, different virtualization layers, and a range of partner technologies.
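The declarative model Wilde describes can be illustrated with a cluster definition of the kind k0rdent documents in its quickstart guides. The sketch below is illustrative only: the template name, version numbers, and sizing values are assumptions, and the exact fields may differ across k0rdent releases. The key idea is that the target environment (here, a hypothetical AWS cluster) is selected by swapping the template and credential, while the operational model stays the same.

```yaml
# Illustrative k0rdent ClusterDeployment manifest (field names and
# values are assumptions; consult the k0rdent docs for your release).
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: demo-aws-cluster        # hypothetical cluster name
  namespace: kcm-system
spec:
  template: aws-standalone-cp-0-0-5   # assumed template/version
  credential: aws-cluster-identity-cred
  config:
    region: us-east-2
    controlPlane:
      instanceType: t3.small
    controlPlaneNumber: 1
    worker:
      instanceType: t3.small
    workersNumber: 2
```

Because the manifest is declarative, lifecycle operations such as scaling workers or rolling to a newer template become edits to this object rather than bespoke tooling per environment, which is what allows VMs, containers, and AI workloads to share one operational model.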

Wilde stresses that modernization is not about abandoning what works. It’s about giving customers choices: what to change, when to change, and what to keep. This is where open source principles and the Mirantis approach align. k0rdent allows customers to adopt AI at their own pace, incrementally breaking down silos rather than creating new ones.

The need for this flexible, composable model becomes even more critical as AI workloads increase in scale. Wilde emphasizes that AI isn’t something companies can “figure out by the end of the year.” Infrastructure, tooling, and operational models will continue to evolve. Some AI-driven transformations will unfold over multiple years; other pressures, such as GPU availability and high-speed networking requirements, demand immediate responses.

Partnering with Nvidia and others allows Mirantis to address this dual challenge. Together, they help enterprises adopt cutting-edge hardware such as BlueField DPUs, deploy high-performance AI clusters, and integrate these environments with existing virtualized and containerized workloads. At the same time, Mirantis ensures that customers retain the freedom to adapt their stack as the AI space evolves.

This dynamic is shaping a broader trend across the industry: AI infrastructure is becoming increasingly modular. Whether it’s hardware accelerators, Kubernetes control planes, storage solutions, or model-serving frameworks, enterprises are seeking integrations rather than monolithic stacks. Mirantis’ role — and the role of k0rdent — is to orchestrate this modularity, providing the glue that allows enterprises to run AI next to legacy workloads without losing operational coherence.

Wilde’s observation that AI is a “marathon, not a sprint” reflects a growing sentiment across the industry. The organizations that succeed long term will be those that adopt flexible architecture, open ecosystems, and infrastructure that evolves with their AI ambitions. The partnership-driven model embraced by Mirantis places it at the center of this transition.
