Most enterprise AI conversations start and end at the model layer. Which LLM? Which cloud? Which vendor? But for the organizations actually trying to run AI in production, the real problem surfaces long before a single inference job runs: the infrastructure beneath the model is an order of magnitude more complex than anything most enterprises have managed before. And most of them are trying to build it alone.
The AI infrastructure stack spans compute—NVIDIA GPUs, custom silicon, next-generation accelerators—plus networking, storage, and orchestration. Each of those layers is evolving rapidly and independently. Assembling them into a coherent, production-grade platform requires expertise that only a small number of organizations possess internally. For everyone else, the result is a compounding integration tax: fragile architectures locked to a single vendor’s roadmap, accruing technical debt every quarter as the underlying components shift.
This is the problem Mirantis was built to solve. With roots in OpenStack—one of the most ambitious open-source infrastructure bets of the last decade—and a decade of Kubernetes-native operations, Mirantis carries institutional knowledge that cannot be replicated quickly. That heritage is now the foundation of k0rdent AI, the company’s open, composable platform for enterprise AI infrastructure.
The Guest: Richard Borenstein, SVP of Business Development at Mirantis
Key Takeaways
- Production AI infrastructure is not a single-layer problem—it spans compute, networking, storage, and orchestration, each evolving on its own timeline.
- Single-vendor lock-in creates an integration tax that compounds quarterly as AI infrastructure components evolve independently.
- Mirantis’s OpenStack and Kubernetes heritage provides the operational scar tissue that enterprises need when evaluating open infrastructure for production AI workloads.
- k0rdent AI is built around NVIDIA's AI Cloud Ready framework—a structural partnership, not a logo relationship—giving customers access to validated, integrated technology from day one.
- The companies that win in AI infrastructure will be those that become connective tissue between best-of-breed components, not those trying to own every layer.
***
Speaking with TFiR, Richard Borenstein of Mirantis described the current state of enterprise AI infrastructure and explained why k0rdent AI's ecosystem-first approach gives enterprises a path from their current infrastructure to AI-ready infrastructure without lock-in.
What Is the Real Complexity of Enterprise AI Infrastructure?
Enterprise AI is consistently framed as a model problem. In practice, the infrastructure layer is where most production deployments stall. The compute landscape alone—spanning NVIDIA GPUs, custom silicon, and next-generation accelerators—must be made to work with networking, storage, and orchestration layers that are each on independent and rapid development cycles.
Q: When enterprises start chasing AI, they often want to put everything together—hardware, software, services—on their own. At what point does that start to break down?
Richard Borenstein: “It really is an order of magnitude larger in terms of complexity. There are so many moving parts, so many pieces that need to be put together. The compute landscape alone is incredibly complex—from the NVIDIA piece to custom silicon to next-gen accelerators, you have to make this all work together: the networking layer, the storage layer, the orchestration layer. Each is evolving independently and rapidly. Trying to do it on your own is a tall task. And often the decisions that you make can lock you or your customers into a bet on a single vendor, and that’s very limiting. You may also be incurring an integration tax that compounds every quarter because of this inflexibility.”
Why Mirantis Chose an Ecosystem-First Architecture
The decision to build k0rdent AI as an ecosystem-first platform was deliberate. Mirantis's partnership with NVIDIA is structural—k0rdent AI is built around NVIDIA's AI Cloud Ready framework, which means customers inherit validated technology integrations rather than building and maintaining them independently. The goal is to absorb infrastructure complexity on behalf of enterprises so they can focus on their core business.
Q: What led Mirantis to focus on building a strong ecosystem around AI infrastructure rather than owning the stack directly?
Richard Borenstein: “We’ve made a deliberate choice to build an ecosystem first. Our partnership with NVIDIA—it’s not just a logo relationship, it’s structural. k0rdent AI is built around NVIDIA’s AI Cloud Ready framework, which means our customers are getting the benefit of all of that technology built into our knowledge base. That is a very important piece of how we can help manage the complexities on behalf of companies so they can get back and do what they do best. The companies that win in AI infrastructure will be the ones that become the connective tissue between the best components, not the ones trying to own every layer. Something I like to say often: act as the bridge between your current infrastructure and your AI-ready infrastructure.”
The OpenStack Lesson and Why It Matters for AI
Mirantis’s pedigree in open-source infrastructure goes back to OpenStack, which succeeded in breaking VMware’s lock-in narrative for cloud. Kubernetes did the same for containers. Mirantis argues the identical dynamic is now unfolding in AI infrastructure—and that its operational scar tissue from those prior cycles is a competitive moat that cannot be faked.
Q: How does Mirantis’s OpenStack heritage inform what you’re doing in AI infrastructure today?
Richard Borenstein: “The pedigree of Mirantis is one of being very open-source-minded. OpenStack was one of the most ambitious infrastructure bets of the last decade. It taught us something that’s now playing out in AI: enterprises will not permanently accept being held hostage by proprietary stacks. The underlying technology is commoditizing. OpenStack succeeded in breaking the VMware lock-in narrative for cloud. Kubernetes did it again for containers. Now the same dynamic is happening in AI infrastructure. Our OpenStack heritage gives us credibility and the scar tissue that you can’t fake. We know how to operationalize complex distributed infrastructure at enterprise scale. We know how to build communities of operators who trust the platform. And critically, we know how to maintain enterprise-grade supportability around open-source cores—which is exactly what enterprises need when they’re evaluating whether to trust open infrastructure for production AI workloads.”
Who k0rdent AI Is Built For
The platform is explicitly designed for the majority of enterprises—the organizations that do not have the internal knowledge base to assemble and operate AI infrastructure independently. Only a small number of the largest enterprises can do this work in-house. k0rdent AI is positioned as the shortcut that allows everyone else to access production-grade AI infrastructure without building the underlying operational expertise from scratch.
Q: Who is the typical enterprise that needs what Mirantis is building?
Richard Borenstein: “When enterprises look at k0rdent AI, they’re not getting a startup’s first product. They’re getting the distilled operational knowledge of teams who’ve run some of the world’s most demanding infrastructure environments. We help provide the shortcut of not having to force companies to piece this together, to learn the technologies themselves. In many of these environments—if not the vast majority—the resources aren’t there either. They don’t have the internal knowledge base to do this independently. It’s really only a handful of the major players that can do that. But what about everybody else?”