Enterprises racing to build AI infrastructure are falling into the same trap: assembling best-of-breed components independently sounds strategic until the integration failures, version conflicts, and architectural lock-in start compounding. The result is an “integration tax” that grows every quarter, their best engineers buried in plumbing instead of differentiation, and an architecture that’s already deprecated by the time it reaches production. The window to get AI infrastructure right—without rebuilding from scratch every 18 months—is narrowing fast.
The answer, increasingly, is composable, pre-integrated AI infrastructure built on open-source foundations—not proprietary monoliths. Mirantis, with decades of operational experience running OpenStack at Fortune 1000 scale and a Kubernetes-native architecture, is positioning its k0rdent AI platform as the OS for enterprise AI infrastructure: sovereign, vendor-neutral, and composable by design.
The Guest: Richard Borenstein, SVP of Growth & Business Development at Mirantis
Key Takeaways
- The DIY AI infrastructure tax is enormous and most enterprises are still underestimating it—assembling a production-grade AI stack means dozens of interdependent decisions, each a months-long project on its own.
- Mirantis k0rdent AI’s composable, declarative architecture lets organizations swap accelerators, storage backends, and tooling without rebuilding the entire stack—composability is the answer to a landscape in permanent beta.
- The NVIDIA partnership is structural, not a logo relationship: k0rdent AI is built around NVIDIA’s architecture framework, with pre-validated reference architectures and certified integrations including NVIDIA Run:ai and NCX Infra Controller.
- GPU utilization at many enterprises sits at just 15–20%—Mirantis addresses this through bin packing, workload placement, and multi-tenancy built into k0rdent AI’s control plane.
- Digital sovereignty is a first-class design principle: k0rdent AI’s inference routing and data residency controls help enterprises meet GDPR and cross-geography security requirements without sacrificing flexibility.
***
In this exclusive interview with Swapnil Bhartiya at TFiR, Richard Borenstein, SVP of Growth & Business Development at Mirantis, explains why enterprise AI infrastructure is an order of magnitude more complex than most organizations anticipate—and how Mirantis k0rdent AI, a composable, Kubernetes-native platform rooted in open-source principles, helps Fortune 1000 companies move from infrastructure acquisition to production AI without the compounding costs of the DIY approach.
The AI Infrastructure Complexity Problem: Why DIY Breaks Down
The AI compute landscape alone is extraordinarily complex—spanning NVIDIA GPUs, custom silicon, next-generation accelerators, networking layers, storage tiers, and orchestration stacks, each evolving independently and rapidly. For enterprises without deep infrastructure expertise, attempting to assemble these components independently means owning every integration failure, every version conflict, and every 3am incident. Mirantis calls this the “integration tax”—a compounding cost that grows every quarter as architectural decisions made in isolation become harder to unwind.
Q: At what point does the DIY approach to AI infrastructure start to break down?
Richard Borenstein: “The AI infrastructure do-it-yourself tax is enormous, and most enterprises are still underestimating it. Assembling a production-grade AI stack means making dozens of interdependent decisions—which GPU operator, which network fabric, which storage tier, how you handle multi-tenancy, how you expose the environment to data science teams, how you handle day-two operations. Each of these is like a six-month project in and of itself, and companies that try to build it from scratch are committing their best engineers to the guts and the plumbing, rather than differentiation.”
Borenstein points to a particularly acute risk: by the time an enterprise finishes building a custom AI stack, the technology has moved on. Pre-integrated platforms like Mirantis k0rdent AI allow organizations to stay current without rebuilding from scratch every 18 months—a critical advantage in a market where networking architectures for AI (InfiniBand, RoCE, NVLink fabrics) are all evolving simultaneously.
Q: What are the three buckets of risk for enterprises tackling AI infrastructure alone?
Richard Borenstein: “It’s really three buckets. Tech stack complexity is meaningful—building a cloud requires proper tenant isolation and fault tolerance, addressing multi-tenancy requirements, expertise in virtualization, GPU configuration and partitioning, high-performance networking and storage. Buying the hardware is the easy part; operationalizing it is the harder part. Then there’s the hyperscaler-like experience companies are aspiring to—a seamless, low-latency, high-performance environment with cloud console, APIs, software, value-added services, accurate metering and billing, security, and proper multi-tenancy. And finally, operational efficiency: they need to build and operate this with a relatively small team, or the math doesn’t work. We’re seeing 15, 20% GPU utilization in many environments, and we’re helping companies optimize through bin packing, workload placement, and multi-tenancy.”
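The bin-packing idea Borenstein mentions can be illustrated with a toy sketch (this is not Mirantis code; the node size and workload mix are invented for illustration). Packing several small GPU requests onto shared nodes, rather than dedicating a node per workload, is what lifts utilization out of the 15–20% range:

```python
# Illustrative sketch (not Mirantis code): first-fit-decreasing bin packing
# of GPU requests onto 8-GPU nodes -- the kind of placement that lifts
# utilization versus one-workload-per-node scheduling.

def pack_workloads(gpu_requests, gpus_per_node=8):
    """Place each request on the first node with enough free GPUs.

    Returns the list of free-GPU counts remaining on each allocated node.
    """
    nodes = []  # free GPUs remaining on each node we've had to allocate
    for req in sorted(gpu_requests, reverse=True):  # biggest requests first
        for i, free in enumerate(nodes):
            if free >= req:
                nodes[i] -= req  # fits on an existing node
                break
        else:
            nodes.append(gpus_per_node - req)  # spin up a new node
    return nodes

requests = [4, 2, 2, 1, 1, 6]  # GPUs requested by six hypothetical workloads
nodes = pack_workloads(requests)
used, capacity = sum(requests), len(nodes) * 8
print(f"nodes: {len(nodes)}, utilization: {used / capacity:.0%}")
# → nodes: 2, utilization: 100%
```

With one workload per node, the same six jobs would occupy six nodes at roughly 33% utilization; packed, they fit on two. Production schedulers add constraints this sketch ignores (topology, preemption, fairness), but the utilization math is the same.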
Mirantis k0rdent AI: The OS for AI Infrastructure
Mirantis k0rdent AI is a composable, Kubernetes-native platform spanning bare metal provisioning, virtualization, GPU orchestration, and AI inference services—what Mirantis describes as Metal-to-Model infrastructure. The platform is built around a declarative architecture, meaning organizations can swap components (accelerators, storage backends, emerging tooling) without rebuilding the entire stack. Mirantis positions k0rdent AI as the “OS for AI infrastructure”—enabling applications rather than locking customers into them.
The NVIDIA relationship is structural. k0rdent AI is built around NVIDIA’s architecture framework, with integrations including NVIDIA Run:ai (AI workload and GPU orchestration), NVIDIA NCX Infra Controller (for managing large-scale GPU fleets), and NVIDIA NIM microservices for LLM serving. Mirantis holds Foundational NVIDIA AI Cloud ISV Partner status, with pre-validated reference architectures for GPU cloud deployments. The platform also integrates with Dell and Supermicro hardware, providing simple, tested, pre-validated solutions for enterprise procurement.
Q: How does k0rdent AI’s composable architecture protect enterprises from vendor lock-in?
Richard Borenstein: “Composability is the architectural answer to a landscape in permanent beta. Nobody knows what the dominant AI hardware looks like in five years—the model layer is changing faster than any infrastructure team can track, networking architectures for AI like InfiniBand, RoCE, or NVLink fabrics are all evolving simultaneously. We made k0rdent AI a composable, declarative architecture, which means organizations can swap components without rebuilding the entire stack. You can adopt a new accelerator, plug in a different storage backend, integrate emerging tooling without starting over. That’s not an accident—it’s a design principle we’ve held since our Kubernetes-native foundation. Composability is how you give organizations the freedom to evolve without the cost of chaos. It’s why we call k0rdent AI the OS for AI infrastructure—an OS doesn’t lock you into applications, it enables them.”
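The declarative pattern Borenstein describes can be sketched in miniature (a toy illustration in Python; the field names and component values are hypothetical, not k0rdent AI's actual schema). Infrastructure is a desired-state document, so swapping a component means changing one field and letting a reconciler act on the difference, rather than rebuilding the stack:

```python
# Toy sketch of declarative component swapping (hypothetical fields and
# values; not k0rdent AI's real schema). The reconciler only acts on what
# changed between current and desired state.

def diff(current, desired):
    """Return only the keys a reconciler must act on."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

desired = {
    "accelerator": "nvidia-h100",
    "storage_backend": "ceph-rbd",
    "network_fabric": "roce",
}
current = dict(desired)  # stack is fully reconciled

desired["storage_backend"] = "weka-fs"  # swap storage; nothing else changes
print(diff(current, desired))
# → {'storage_backend': 'weka-fs'}
```

The accelerator and network fabric are untouched by the swap, which is the point: the blast radius of a component change is limited to the component.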
Q: What does the turnkey GPU cloud offering look like for enterprises?
Richard Borenstein: “k0rdent AI provides a turnkey GPU cloud—a cloud in a box—that already integrates the infrastructure automation, Kubernetes, service provisioning, and customer-facing cloud services into a single platform. It gives both operators and customers a hyperscaler-like cloud experience, fully packaged, automated, and ready to monetize. They get all of this pre-assembled—that makes the internal experience much easier and the external customer-facing experience much more valuable. The platform delivers a clear separation of concerns: we build and run the hard part, and you focus on your business so your customers can consume it.”
OpenStack Lessons Applied to Enterprise AI
Mirantis’s open-source heritage is a central competitive differentiator. The company built its reputation managing OpenStack at Fortune 1000 scale—helping enterprises run some of the most demanding distributed infrastructure environments in the world, from large-scale cloud deployments to performance-intensive industries. OpenStack taught Mirantis that enterprises will not permanently accept being held hostage by proprietary stacks: OpenStack broke VMware lock-in for cloud, Kubernetes did it again for containers, and the same dynamic is now playing out in AI infrastructure. Mirantis applies that operational scar tissue—tenant isolation, fault tolerance, enterprise-grade supportability around open-source cores—directly to k0rdent AI.
Q: What patterns from the OpenStack era are you bringing into AI infrastructure today?
Richard Borenstein: “It’s not just that we’ve built the most capable and robust AI platform for the modern era—it’s the human equity we apply against it as well. Our professional services and managed services come out of what we learned: these companies need help, they need hand-holding at times, and they need their people trained and certified. We become partners with them, not just at the technology level, but at the human level—that allows them to call us in when they need help, or even use our managed services that actually run their platforms and manage the day-to-day for them. The other big takeaway is that evolution is something you have to factor in—this needs to be able to change on a daily basis, if required. If you don’t have the tooling to do that, you will be stuck in long development cycles, beholden to a multitude of vendors.”
Q: How does Mirantis’s open-source background shape its approach to AI sovereignty?
Richard Borenstein: “Throughout our evolution from OpenStack into Kubernetes and container orchestration and developer platforms, we’ve remained committed to openness, operational control, and customer choice of technology. We’ve extended all of that now to meet the emerging requirements around AI infrastructure with a very key focus on digital sovereignty and modern virtualization. We’ve built into the platform the ability to do inference routing, so you can ensure sovereignty and GDPR compliance and security. We are pioneers in that private cloud evolution, and this gives us a very informed capability to help guide companies to achieve those outcomes.”
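The inference-routing idea Borenstein references can be shown with a toy sketch (the endpoint names and region tags are invented for illustration; this is not k0rdent AI's actual API). Each request carries a data-residency tag, and the router only considers endpoints permitted to process that region's data:

```python
# Toy illustration of residency-aware inference routing (hypothetical
# endpoints; not k0rdent AI's actual API). Requests tagged with a data
# region are only routed to endpoints allowed for that region.

ENDPOINTS = [
    {"name": "eu-frankfurt", "regions": {"EU"}},          # GDPR-scoped
    {"name": "us-virginia", "regions": {"US"}},
    {"name": "global-burst", "regions": {"US", "APAC"}},  # not GDPR-scoped
]

def route(data_region):
    """Return the names of endpoints permitted for this data region."""
    allowed = [e["name"] for e in ENDPOINTS if data_region in e["regions"]]
    if not allowed:
        raise ValueError(f"no compliant endpoint for region {data_region!r}")
    return allowed

print(route("EU"))  # EU personal data stays on the EU endpoint
# → ['eu-frankfurt']
```

Real routing layers add latency- and load-aware selection among the compliant candidates, but the residency filter runs first: a non-compliant endpoint is never in the candidate set.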
The Mindset Shift: From Products to Ecosystem
Borenstein argues that the most important shift enterprises need to make in AI infrastructure isn’t technical—it’s strategic. The companies that win in AI infrastructure will be the ones that become the connective tissue between the best components, not those trying to own every layer. With the right ecosystem partner, enterprises can focus on their core business and on how they will use AI to serve their corporate goals, rather than carrying the operational burden of assembling and maintaining the underlying stack.
Q: What is the one mindset shift enterprise teams need to make when building AI infrastructure this year?
Richard Borenstein: “They need to think about the top end of the stack—how they’re going to empower AI to serve their corporate goals. With the right partner, they can actually focus on their core business. They don’t have to carry that burden. We’ve gone through this phase of ‘I’m going to build it myself, I’m going to package it together’—as we’ve seen in previous technology cycles. At first you think ‘I’m going to own this,’ and then you realize there is really no way for me to focus on my business if I’m actually doing the nitty gritty and trying to piece this together, not only for day one, but for the long term.”