Loft Labs continues to shape the future of Kubernetes infrastructure with the release of vCluster version 0.26 — a release that directly targets the needs of AI infrastructure teams. Saiyam Pathak, Principal Developer Advocate at Loft Labs, joined TFiR to unpack what’s new and why it matters.
Built for GPU-Aware AI Workloads
“As the AI wave takes over infrastructure conversations, teams need virtual clusters that can support GPU-specific workloads and complex scheduling patterns,” said Pathak. “Before vCluster 0.26, you couldn’t run multiple schedulers within a virtual cluster. Now you can.”
That change alone is significant. AI/ML teams often require custom scheduling logic — for example, GPU-first workload placement, or the ability to queue large inference jobs. Supporting multiple schedulers within a single virtual cluster means platform teams no longer need to spin up separate clusters for each use case.
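In stock Kubernetes, a Pod opts into an alternative scheduler through the standard `spec.schedulerName` field; with 0.26, that same mechanism can point at a custom scheduler running inside the virtual cluster. A minimal sketch (the scheduler name, image, and GPU request are illustrative placeholders, not part of vCluster itself):

```yaml
# Pod that requests a GPU and opts into a custom scheduler.
# "gpu-scheduler" is a placeholder for whatever scheduler you
# deploy inside the virtual cluster.
apiVersion: v1
kind: Pod
metadata:
  name: inference-job
spec:
  schedulerName: gpu-scheduler   # standard Kubernetes field
  containers:
    - name: model
      image: my-inference-image:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1   # place this Pod on a GPU node
```

Because each team's Pods can name a different scheduler, several scheduling policies can now coexist inside one virtual cluster.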
Namespace Syncing: Bridging Virtual and Host Clusters
Another addition in v0.26 is namespace syncing. This gives teams fine-grained control over how namespaces, and the resources within them, are synchronized between a virtual cluster and the host Kubernetes cluster.
Pathak explained: “There are cases where specific namespace patterns need to be synced back to the host — and now, with 0.26, you can hardcode those patterns.”
This feature is particularly valuable in regulated environments, where auditing, observability, or governance tooling may still operate at the host level but require visibility into tenant namespaces.
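In practice this is driven from the vCluster configuration file. The sketch below shows the general shape of a pattern-based namespace mapping; the exact field names are our assumption from the 0.26 syncing options and should be verified against the vCluster documentation:

```yaml
# vcluster.yaml (sketch; field names are assumptions, check the docs)
sync:
  toHost:
    namespaces:
      enabled: true
      mappings:
        byName:
          # Sync virtual namespaces matching "team-*" to host
          # namespaces named "acme-team-*" (patterns are illustrative).
          "team-*": "acme-team-*"
```

Host-level audit and observability tooling can then watch the mapped host namespaces without needing access to the virtual cluster itself.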
A Public Roadmap
Loft Labs is also making a cultural shift toward transparency. Alongside the v0.26 release, the company launched a public roadmap at vcluster.com/launch, outlining planned releases for August through October.
“You’ll see the next set of features we’re working on — and it’s a packed schedule,” said Pathak. “We’re moving quickly, and we want the community to know what’s coming.”
The roadmap includes upcoming innovations like Private Nodes, smarter bare-metal autoscaling, and the much-anticipated standalone vClusters — all designed to expand vCluster’s flexibility for different tenant isolation models.
At its core, vCluster is about solving a familiar platform engineering challenge: how to give teams control without over-provisioning infrastructure.
Instead of allocating entire clusters to each AI or ML team — an expensive and often unsustainable model — vCluster enables organizations to provision isolated virtual environments inside a single Kubernetes cluster. With support for GPU nodes, shared or dedicated resource pools, and now multi-scheduler capability, vCluster is becoming the foundation for modern internal developer platforms.
Looking Ahead
This release lays the groundwork for a larger vision: making vCluster the single tool that spans the entire Kubernetes multi-tenancy spectrum.
With AI workloads pushing infrastructure to new limits, tools like vCluster that offer efficient, scalable, and secure multi-tenancy will define the next generation of platform engineering.
Explore the roadmap and follow upcoming features at vcluster.com/launch.