AI Performance Is Exposing New Infrastructure Gaps. Mirantis’ k0rdent Aims to Close Them.

Guest: Dominic Wilde
Company: Mirantis
Show Name: An Eye on AI
Topic: AI Infrastructure

As enterprises race to build and deploy AI systems, they’re hitting infrastructure problems that simply didn’t exist at smaller scales. In this clip from our conversation with Dominic Wilde, SVP & GM of the Core Business at Mirantis, he outlines the new performance, memory, and networking challenges emerging as companies re-architect their environments for AI — and how k0rdent is evolving into a full AI-ready platform to address them.

AI is forcing organizations to rethink their infrastructure at a deeper level than most anticipated. While cloud modernization has been underway for years, AI introduces a fundamentally different set of requirements — especially around performance and resource alignment. Wilde highlights performance as one of the biggest challenges customers now face. AI workloads are extremely sensitive to how GPU and CPU resources are paired with specific tasks, and misalignment leads to massive drops in efficiency.

This is where k0rdent’s design begins to show its advantage. Originally intended to tame Kubernetes sprawl and enable multi-cloud lifecycle management, k0rdent is now being expanded to help enterprises ensure their infrastructure aligns correctly with the demands of AI compute. Wilde explains that k0rdent is incorporating AI-focused capabilities that ensure the right combination of GPU, CPU, and memory resources is delivered to each workload. It’s not just a matter of provisioning; it’s about matching resources to the performance attributes required for a given training or inference job.
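Wilde doesn’t walk through the mechanics in this clip, but in plain Kubernetes terms this kind of GPU/CPU/memory alignment is typically expressed through resource requests and node selection. A minimal sketch, assuming a hypothetical node label and container image (all names here are illustrative, not k0rdent’s actual API):

```yaml
# Illustrative only: pin an inference job to nodes carrying a specific
# GPU class, and request CPU and memory sized to keep that GPU fed.
apiVersion: v1
kind: Pod
metadata:
  name: inference-job                    # hypothetical workload name
spec:
  nodeSelector:
    gpu.example.com/class: a100-80gb     # hypothetical node label
  containers:
    - name: inference
      image: registry.example.com/inference:latest   # placeholder image
      resources:
        requests:
          nvidia.com/gpu: "1"
          cpu: "8"        # CPUs matched to the GPU, per the alignment point above
          memory: 64Gi
        limits:
          nvidia.com/gpu: "1"
          cpu: "8"
          memory: 64Gi
```

The point of the sketch is the pairing: a GPU request without enough CPU or memory alongside it is exactly the misalignment Wilde says causes large efficiency drops.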

Memory architecture is another area undergoing significant change. AI workloads operate very differently from traditional enterprise applications. Wilde notes that teams must now consider factors like local memory usage, huge page configurations, and advanced memory mapping strategies — elements that rarely mattered in traditional VM- or container-based environments. These optimizations can significantly impact the performance of large models and GPU-intensive pipelines.
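Huge pages are one concrete example of this shift that Kubernetes already models as a first-class resource. A minimal sketch of a pod requesting pre-allocated 1Gi huge pages (workload and image names are illustrative; the node must have huge pages reserved, typically via kernel boot parameters, before the request can be satisfied):

```yaml
# Illustrative only: request pre-allocated 1Gi huge pages for a
# GPU-intensive container. Kubernetes requires a regular memory
# request alongside any hugepages-<size> request.
apiVersion: v1
kind: Pod
metadata:
  name: training-job                     # hypothetical workload name
spec:
  containers:
    - name: trainer
      image: registry.example.com/trainer:latest    # placeholder image
      resources:
        requests:
          hugepages-1Gi: 16Gi
          memory: 32Gi
          cpu: "8"
        limits:
          hugepages-1Gi: 16Gi
          memory: 32Gi
          cpu: "8"
      volumeMounts:
        - mountPath: /hugepages
          name: hugepage
  volumes:
    - name: hugepage
      emptyDir:
        medium: HugePages   # backs the mount with the node's huge page pool
```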

Networking also looks different in AI-driven architectures. Data movement patterns are more complex, and the bandwidth and latency requirements often exceed what enterprise environments were built to handle. Wilde emphasizes that these networking differences are becoming increasingly important as companies scale their AI clusters.

k0rdent is adding new capabilities to meet this shift. One key enhancement is the introduction of DRA-driven (Dynamic Resource Allocation) load balancing and workload movement. This allows organizations to automatically move workloads across infrastructure to ensure optimal performance, utilization, and balance. Instead of static allocation or oversized clusters, k0rdent enables automated, performance-aware resource placement.
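Kubernetes is standardizing this kind of device-aware placement under Dynamic Resource Allocation (DRA). A minimal sketch of a ResourceClaim and the pod that consumes it, using the v1beta1 API and an illustrative device class name (this is generic Kubernetes, not k0rdent’s specific implementation):

```yaml
# Illustrative only: a claim for a single GPU device, resolved by the
# scheduler against whatever DRA driver publishes the device class.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: gpu.example.com   # hypothetical device class
---
# A pod referencing the claim; the container binds to the allocated device.
apiVersion: v1
kind: Pod
metadata:
  name: claim-consumer                     # hypothetical workload name
spec:
  resourceClaims:
    - name: gpu
      resourceClaimName: single-gpu
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      resources:
        claims:
          - name: gpu
```

Unlike a fixed `nvidia.com/gpu: "1"` request, a claim describes what the workload needs and lets the scheduler decide where that need is best satisfied, which is the kind of automated, performance-aware placement described above.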

A particularly interesting aspect of Wilde’s comments is how fast this space is evolving. NeoCloud providers — modern cloud platforms designed specifically for AI workloads — are pushing hardware and software to unprecedented scales. Wilde points out that as these platforms grow, new failures and bottlenecks appear. Mirantis is working directly with many of these providers, helping them solve issues in real time. This collaboration is feeding back into k0rdent as Mirantis integrates solutions into a flexible, composable ecosystem of tools.

k0rdent’s composability is essential to this effort. Wilde emphasizes that customers aren’t forced into a narrow stack. They can integrate supporting tools from a catalog of partners, apply only the components they need, and design their AI environments around open building blocks rather than rigid architectures. This flexibility allows k0rdent to serve as the connective tissue across complex, multi-layered infrastructures — from baseline cloud resources to advanced AI-specific schedulers and GPU services.

What becomes clear in this clip is that AI infrastructure is not static. It is rapidly evolving as organizations scale their GPU environments and discover new operational obstacles. Mirantis is positioning k0rdent as a platform that not only solves today’s challenges but adapts quickly as new ones emerge. Whether it’s optimizing GPU alignment, tuning memory structures, evolving networking patterns, or driving automated workload movement, k0rdent is becoming a core control layer for AI-era infrastructure.
