Guest: Lukas Gentele (LinkedIn)
Company: vCluster Labs
Show Name: KubeStruck
Topics: Kubernetes, Cloud Native, AI Infrastructure
Enterprises running Kubernetes on shared infrastructure face noisy neighbors, limited isolation, and complex operations. What if you could run fully isolated virtual clusters directly on VMs or bare metal—without a host cluster? That’s exactly what vCluster Labs is enabling with vCluster Standalone.
In this episode, Lukas Gentele, Co-Founder and CEO of vCluster Labs, joins us to unpack how vCluster Standalone simplifies cluster bootstrapping, strengthens isolation, and supports GPU workloads for AI and hybrid environments. Gentele explains why vCluster Standalone is ideal for private clouds, AI supercomputers, and greenfield deployments where no host cluster exists.
Here is the edited interview:
Swapnil Bhartiya: Running Kubernetes clusters on shared infrastructure often comes with trade-offs—noisy neighbors, limited isolation, and operational complexity. Enterprises want stronger tenancy models without giving up Kubernetes agility. What if you could run fully isolated virtual clusters with their own control planes directly on VMs or bare metal—without depending on a host cluster? That’s what vCluster Labs is delivering with vCluster Standalone. Lukas, great to have you back on the show.
Lukas Gentele: Great to be back again so soon! Yes, we’ve had a lot to announce this year.
Swapnil Bhartiya: We’ve been covering vCluster Labs since your Loft Labs days. You’ve consistently pushed the boundaries of Kubernetes tenancy—private nodes, autoscaling, and now vCluster Standalone. Walk us through what this is and how it extends the evolution of vCluster.
Lukas Gentele: When we launched vCluster, it was designed to run on top of an existing Kubernetes cluster—EKS, Rancher, OpenShift, you name it. It solved multi-tenancy for those environments by letting teams create isolated virtual clusters. But node-level isolation was still tricky since workloads shared the same underlying nodes. We introduced features like private nodes and dedicated node selectors to address that.
With vCluster Standalone, we’re going a step further. Many users building new estates—like GPU supercomputers or private clouds—don’t yet have a host cluster. They asked: “Why do I need to spin up a K3s or Rancher cluster just to start vCluster?” Standalone solves that. You can bootstrap a new cluster directly on bare metal or VMs with a single command, just like K3s. It spins up the control plane, lets you scale to HA, and then lets you add worker nodes manually or automatically. It’s designed to solve the cluster-one problem: how do you create the very first cluster from which to launch your vClusters?
Swapnil Bhartiya: Traditional vCluster runs inside a host cluster’s namespace, but Standalone breaks that dependency. What new possibilities does this unlock in terms of isolation and flexibility?
Lukas Gentele: The biggest advantage is unified support and consistency. With Standalone, your initial cluster is powered by the same technology and team that supports your tenant clusters. Previously, if your host was OpenShift or EKS, we couldn’t directly help troubleshoot that base layer. Now, with Standalone, you can run everything under the same roof—no third-party dependency. It’s essentially vClusters all the way down, giving organizations full control and vendor independence.
Swapnil Bhartiya: You’ve worked with partners and beta testers. Where do you see the strongest use cases for vCluster Standalone, and when should organizations consider using it?
Lukas Gentele: If you’re on a hyperscaler like AWS, GCP, or Azure using managed Kubernetes services such as EKS, you’re already in a good spot—keep using those. But if you’re in a regional cloud, a private data center, or an AI supercomputer setup without a managed Kubernetes service, that’s where Standalone shines. It’s ideal for bootstrapping environments that need a Kubernetes-like experience without a host cluster dependency.
Swapnil Bhartiya: From a business perspective, how does Standalone reduce complexity, cost, or risk compared to existing tenancy options?
Lukas Gentele: Everything we offer for vCluster—24/7 support, SLAs—extends to Standalone. Interestingly, it’s a step back to Kubernetes’ roots: running control planes directly on VMs or bare metal. We’re not telling users to abandon containerized control planes, but for organizations starting from zero, this provides a simple, supported way to spin up their first cluster. It’s ideal when you want the same level of expertise from the team behind your core tenancy layer.
Swapnil Bhartiya: Let’s talk about AI. GPU workloads are exploding across AI and ML. Is vCluster Standalone especially suited for GPU use cases?
Lukas Gentele: Absolutely. vCluster is becoming one of the best Kubernetes options for GPU workloads. You can manage and share GPU estates using vCluster and vNode, and Standalone serves as the bootstrap point for GPU environments—like NVIDIA SuperPods or AI supercomputers. Once you spin up Standalone, you can deploy multiple vClusters with private nodes to dynamically allocate GPU resources. This setup lets enterprises share GPU clusters securely across multiple teams or business units—a full multi-tenant GPU cloud.
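To make the GPU-sharing picture above concrete, here is a minimal sketch (not from the interview) of how a team working inside one tenant vCluster could request a GPU for a workload using the standard Kubernetes Python client. The kubeconfig path, namespace, and container image are illustrative assumptions; nvidia.com/gpu is the conventional resource name exposed by the NVIDIA device plugin.

```python
from kubernetes import client, config

# Load credentials for one tenant vCluster (the kubeconfig path is an assumption).
config.load_kube_config(config_file="tenant-vcluster.kubeconfig")

# A throwaway pod that asks the scheduler for a single GPU via the
# conventional NVIDIA device-plugin resource name and runs nvidia-smi once.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:12.4.1-base-ubuntu22.04",  # illustrative image tag
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Created pod gpu-smoke-test; the scheduler will place it on a GPU-backed node.")
```

Because each team talks to its own vCluster API server, the same request pattern works for every tenant while the platform decides which physical GPU nodes back it.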
Swapnil Bhartiya: Have you already seen customers using vCluster Standalone or vCluster for AI workloads?
Lukas Gentele: Yes, we’ve seen vCluster heavily used for training and inference workloads. Standalone itself is primarily used to run those tenant clusters that connect to GPU resources. Think of Standalone as the static base layer—your control plane. On top of it, you deploy multiple dynamic vClusters that handle actual workload scheduling and GPU operations.
Swapnil Bhartiya: What does this signal for the broader future of Kubernetes tenancy? Could virtual clusters eventually replace host clusters entirely?
Lukas Gentele: I don’t think there’ll ever be one Kubernetes distribution to rule them all. The future is heterogeneous—public clouds, edge, bare metal, vSphere, neoclouds—each with different needs. vCluster aims to be the abstraction layer across them. Think of it like Terraform: you could use CloudFormation for AWS, but Terraform gave you a single, cross-cloud language. Similarly, vCluster YAML can become that unifying language for multi-tenancy across environments.
Swapnil Bhartiya: What does deployment look like? How easy is it to get started, and how involved does your team get?
Lukas Gentele: It’s as simple as installing K3s—literally a single command. The installer sets up everything: containerd, the Kubernetes control plane, and vCluster configuration. Add a couple more commands, and you can make it highly available or connect it to the vCluster platform for automated worker provisioning. For larger enterprises, we offer hands-on architectural reviews to design the right tenancy model and operational strategy.
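As a quick illustration of that "install, then verify" flow, here is a minimal sketch (not from the interview) that checks a freshly bootstrapped cluster by listing its nodes with the standard Kubernetes Python client. The kubeconfig path is an assumption; the location written by the installer may differ.

```python
from kubernetes import client, config

# Point at the kubeconfig generated during bootstrap (path is an assumption).
config.load_kube_config(config_file="/etc/vcluster/standalone.kubeconfig")

# List nodes and report readiness: a quick check that the control plane is
# answering and that any worker nodes you added have actually joined.
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```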
Swapnil Bhartiya: Many organizations already run hybrid environments. How does vCluster Standalone integrate with existing infrastructure?
Lukas Gentele: It’s very flexible. You can mix and match environments—AWS, private cloud, even hybrid setups. Using our private nodes and auto-nodes features, you can securely connect nodes from anywhere via a built-in VPN for pod-to-pod traffic. The goal is seamless hybrid operation with strong network isolation.
Swapnil Bhartiya: Lukas, it’s clear that vCluster Standalone marks a new chapter for Kubernetes tenancy—offering stronger isolation, GPU readiness, and hybrid flexibility. Thank you for breaking it all down.
Lukas Gentele: Thank you! Always great talking with you. See you at KubeCon.