As enterprises race to implement artificial intelligence (AI) capabilities, a significant challenge emerges: how to efficiently deploy and scale inference workloads across distributed environments. Enter Mirantis and Gcore, two companies at the intersection of cloud-native innovation and AI infrastructure, partnering to deliver a seamless, scalable platform for global AI deployment.
Their recent collaboration—highlighted at KubeCon + CloudNativeCon Europe—signals not just a technological alliance, but a shared vision: to democratize and simplify AI deployment at scale.
Gcore: Global Edge Computing at Enterprise Scale
Gcore has established itself as a formidable player in the global edge and AI infrastructure space. Founded in 2014, the company initially focused on network capabilities before expanding its offerings to encompass the entire technology stack.
“We are an international edge and AI provider with 180 points of presence around the globe serving content, securing end applications, and distributing inference deployments connected to our global network backbone,” explains Seva Vayner, Product Director, Edge Cloud and AI at Gcore. “This delivers low latency and real-time interaction with AI models.”
With 14,000 peers and partners worldwide, Gcore provides infrastructure-as-a-service, platform-as-a-service, and AI capabilities—having entered the AI space in 2022, ahead of many competitors. Their extensive network allows for distributed model deployment, addressing the critical requirements of latency-sensitive AI applications.
Mirantis: Evolving Through Technology Paradigm Shifts
On the other hand, Mirantis has demonstrated remarkable adaptability throughout the evolution of cloud computing. The company established itself during the OpenStack era, expanded through the acquisition of Docker’s Enterprise Platform business, and now sits at the forefront of Kubernetes orchestration and AI infrastructure.
“When you think about a journey of a company like ours, you start by focusing on technology and building expertise there,” notes Alex Freedland, Co-Founder and CEO of Mirantis. “But as you mature, your real value to customers is walking them through technology cycle transitions.”
Freedland frames this evolution in terms of infrastructure orchestration: “We started with orchestrating hypervisors via OpenStack, then orchestrating containers with Kubernetes, and now we’re orchestrating clusters for AI workloads. It’s a natural progression built on our foundational expertise.”
This evolution has positioned Mirantis perfectly for the AI era. As Freedland states, “Everything we’ve done to date was a warm-up for the world of AI.” By anchoring their solutions in open standards like Apache 2.0 licensing and contributing to the Linux Foundation and OpenInfra Foundation, Mirantis ensures that the infrastructure for AI is not only robust, but also future-proof.
The Strategic Alignment Between Mirantis and Gcore
The partnership represents a natural alignment of complementary capabilities. While Gcore excels in global edge delivery and public cloud AI services, Mirantis brings decades of expertise in enterprise infrastructure orchestration and private cloud deployments.
Freedland describes their approach: “We lifted technology that Gcore created on their infrastructure, made it into software, integrated it with k0rdent—our orchestration platform—and now we’re delivering it to enterprise customers.”
This collaboration addresses a critical market gap: enterprises require the same AI capabilities available in public clouds but deployed in private, sovereign environments where their sensitive data resides. The arrangement extends beyond typical vendor partnerships, with Gcore’s technology being packaged as software that enterprises can deploy on-premises—facilitated by Mirantis’ orchestration capabilities.
Technical Implementation: Simplifying AI Deployment for Enterprises
The joint solution focuses on simplifying the deployment of AI inference workloads—addressing the gap between data scientists who develop models and the operational requirements of running those models at scale.
“ML engineers are not infrastructure operational engineers,” Vayner emphasizes. “They don’t focus on Kubernetes, Helm charts, or custom resources. Our solution provides a serverless platform where you don’t need to think about Kubernetes at all.”
The platform offers several technical advantages:
- Simplified Model Deployment: Native UI, API, SDKs, and Terraform support for deployment without requiring Kubernetes expertise
- Pre-packaged Model Catalog: Open source models available for immediate deployment
- Production-Ready Scaling: Enterprise-grade infrastructure that can scale from proof-of-concept to production
- Hybrid Deployment Options: Support for private, on-premises, and hybrid cloud environments
- Low-Latency Inference: Distributed architecture that can run models close to data sources
For enterprises implementing AI, this solution removes significant barriers to deployment while maintaining necessary controls over data location and sovereignty.
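To make the “serverless” abstraction concrete, here is a minimal, purely illustrative sketch in Python of what such a deployment interface might feel like from an ML engineer’s side. The `InferenceClient` class and all of its method names are hypothetical stand-ins (not Gcore’s or Mirantis’ actual SDK), implemented as an in-memory mock so the example runs end to end:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """A deployed model endpoint as the user sees it."""
    name: str
    model: str
    min_replicas: int
    max_replicas: int
    status: str = "running"

class InferenceClient:
    """Hypothetical serverless inference client: the caller never
    touches Kubernetes manifests, Helm charts, or custom resources."""

    def __init__(self) -> None:
        self._deployments: dict[str, Deployment] = {}

    def deploy(self, name: str, model: str,
               min_replicas: int = 0, max_replicas: int = 4) -> Deployment:
        # On a real platform, this call would render and apply the
        # underlying Kubernetes objects (Deployment, Service,
        # autoscaler) on the user's behalf.
        dep = Deployment(name, model, min_replicas, max_replicas)
        self._deployments[name] = dep
        return dep

    def list_deployments(self) -> list[str]:
        return sorted(self._deployments)

client = InferenceClient()
client.deploy("chat-endpoint", model="mistral-7b-instruct", max_replicas=8)
print(client.list_deployments())  # ['chat-endpoint']
```

The point of the sketch is the surface area: one call with model name and scaling bounds, rather than a stack of YAML the data scientist has to own.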
Addressing the Emerging Inference Workload Challenge
Both companies recognize that inference—not just training—represents the future of AI workloads at scale. “When AI applications hit, the one massive, biggest workload that humanity would have ever seen is going to be an inferencing workload,” Freedland explains. “The nature of the AI workload requires it to run where data is stored or generated—the compute has to come to that data.”
This reality drives the need for highly distributed, yet centrally managed, AI infrastructure. The partnership has already seen implementation success, with Netherlands-based sovereign cloud provider Nebul deploying the solution and expanding operations into Germany.
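The pull toward distributed inference is easy to quantify with a back-of-envelope propagation-delay estimate. Assuming light travels through optical fiber at roughly 200,000 km/s (about two-thirds of c) and ignoring all routing, queuing, and serialization overhead, distance alone sets a floor on round-trip latency:

```python
# Best-case signal speed in optical fiber, roughly c / 1.5.
FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time in milliseconds over fiber,
    from propagation delay alone (no routing or processing)."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

for km in (100, 1_000, 4_000):
    print(f"{km:>5} km -> >= {min_rtt_ms(km):.0f} ms RTT")
# prints:
#   100 km -> >= 1 ms RTT
#  1000 km -> >= 10 ms RTT
#  4000 km -> >= 40 ms RTT
```

Serving inference from a region 4,000 km away thus adds at least ~40 ms to every request before any model work begins, which is why latency-sensitive AI workloads favor compute placed near where the data is generated.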
Contributing to Open Source and Industry Standards
The partnership extends beyond commercial solutions, with both companies actively contributing to emerging standards for AI infrastructure.
“We’re building on the shoulders of previous technologies that are essential to making AI work,” explains Freedland. “There’s a huge need now for standards to emerge around the AI stack—from infrastructure through application layers.”
Both companies are engaged with the Linux Foundation and OpenInfra Foundation regarding AI standards development. Their commitment to open source principles is evident in their approach: “Everything we build is Apache 2 and open source first, because we want the community to come together,” notes Freedland.
Strategic Expansion and Future Directions
The Mirantis-Gcore partnership represents just one component of broader strategic initiatives. Gcore recently announced an agreement with Northern Data to provide an “intelligence delivery network” leveraging 35,000 GPUs across Europe.
“Our ultimate goal is to democratize access to GPU infrastructure for training, fine-tuning, and inference,” Vayner explains. “We aim to deliver our platform and open source standards to make GPU environments optimized for cost and easy to use for ML engineers.”
Implications for Enterprise AI Implementation
For enterprises implementing AI capabilities, this partnership addresses several critical challenges:
- Infrastructure Complexity: Abstracting Kubernetes complexity without sacrificing capability
- Data Sovereignty: Enabling AI workloads in private environments where data must remain
- Operational Efficiency: Simplifying the path from model development to production deployment
- Future-Proofing: Building on open standards that will evolve with the AI ecosystem
As organizations move beyond AI experimentation toward production implementations, solutions that bridge the gap between data science and operations will prove increasingly valuable.
Wrap-up
The Mirantis-Gcore partnership exemplifies how infrastructure providers are evolving to meet the demands of AI workloads. By combining edge networking expertise with enterprise-grade orchestration capabilities, they’re addressing the unique challenges of deploying inference at scale.
As Freedland notes, “There is this gold rush happening, but you have to be able to sell picks and shovels—and make sure they actually work everywhere.”
For enterprises navigating the complexities of AI implementation, such partnerships provide critical infrastructure components that simplify deployment while maintaining necessary control over data and models—ultimately accelerating the path to AI value realization.
Guests: Alex Freedland | Seva Vayner
Companies: Mirantis | Gcore
Show: KubeStruck