Why Ray Joining the PyTorch Foundation is a Defining Moment for Open Source AI | Luca Antiga, Lightning AI

Guest: Luca Antiga (LinkedIn)
Foundation/Company: PyTorch Foundation | Lightning AI
Show Name: The Source
Topic: Open Source

The open source AI ecosystem is rapidly evolving — and the latest move could reshape its future. Ray, the widely used distributed compute framework that powers many modern AI workloads, has officially joined the PyTorch Foundation. This partnership represents more than organizational realignment; it’s a decisive step toward building a unified, vendor-neutral foundation for the AI stack. In this conversation, Luca Antiga, Head of the Technical Advisory Council for the PyTorch Foundation and CTO at Lightning AI, explains how this collaboration strengthens open source governance and accelerates innovation across the AI infrastructure landscape.

Ray has become one of the most important technologies in distributed computing, enabling developers to scale AI training and inference workloads seamlessly across clusters and machines. As Antiga describes, “Ray is essentially a compute engine — a way to distribute computations across different machines, allowing orchestration of workloads at scale.” From reinforcement learning systems to large-scale inference pipelines, Ray has become a critical part of AI’s plumbing.
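
To make the "compute engine" idea concrete, here is a minimal sketch of Ray's core task primitive. It is illustrative only: the `square` function is a hypothetical stand-in for real work, and the example assumes a local Ray installation rather than a production cluster.

```python
import ray

ray.init()  # starts a local Ray runtime, or connects to an existing cluster

@ray.remote
def square(x):
    # Each invocation is scheduled by Ray and may run on any machine
    # in the cluster; locally, tasks run across worker processes.
    return x * x

# .remote() returns futures immediately; work proceeds in parallel
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same pattern scales from a laptop to a multi-node cluster without changing the application code, which is what makes Ray useful as general-purpose plumbing for AI workloads.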

The story of Ray’s transition begins with Anyscale, the company that originally built and maintained the project. Recognizing that open source adoption thrives under neutral governance, Anyscale decided to transfer Ray’s ownership to the PyTorch Foundation. “The PyTorch Foundation is a vendor-neutral space,” Antiga notes. “By placing Ray under its umbrella, the goal is to broaden adoption and make it easier for companies to contribute without feeling like they’re advancing another vendor’s commercial agenda.”

This neutrality is key to open collaboration. When foundational technologies are owned by a community rather than a single company, enterprises are more inclined to contribute. “It’s about leveling up the floor where everyone plays,” Antiga says. “When contributions benefit the whole ecosystem, innovation becomes shared, not siloed.”

The PyTorch Foundation itself has evolved beyond being a home for the PyTorch framework. It’s now an umbrella foundation, welcoming foundational projects like Ray and the vLLM inference engine into a cohesive ecosystem. As Antiga explains, “PyTorch can be seen as the power plant, and Ray as the distribution network — both are essential to build large-scale systems.” The combination allows AI practitioners to connect training, inference, and orchestration into a unified, open infrastructure stack.

Ray also fills a crucial gap above PyTorch’s native distributed layer. While PyTorch supports distributed training through Torch Distributed, Ray extends those capabilities to broader orchestration tasks — managing datasets, coordinating workflows, and integrating reinforcement learning loops. “It’s a layer above,” Antiga explains. “Ray handles complex orchestration that goes beyond single-model distribution, like coordinating multiple processes or models in dynamic systems.”
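
As a rough illustration of that "layer above," the sketch below uses a Ray actor to hold a PyTorch model replica and fan inference requests out across several replicas. The `ModelWorker` class and the tiny `nn.Linear` model are hypothetical stand-ins, not code from either project; PyTorch does the math inside each worker while Ray handles placement and coordination.

```python
import ray
import torch
import torch.nn as nn

ray.init()

@ray.remote
class ModelWorker:
    """A stateful Ray actor that owns its own PyTorch model replica."""

    def __init__(self):
        self.model = nn.Linear(4, 2)  # hypothetical tiny model
        self.model.eval()

    def predict(self, batch):
        # PyTorch runs the forward pass; Ray decided where this actor lives
        with torch.no_grad():
            return self.model(torch.tensor(batch, dtype=torch.float32)).tolist()

# Ray orchestrates several replicas; each call returns a future
workers = [ModelWorker.remote() for _ in range(2)]
batches = [[[0.0] * 4], [[1.0] * 4]]
results = ray.get([w.predict.remote(b) for w, b in zip(workers, batches)])
print(results)
```

Torch Distributed alone synchronizes processes within one training job; a pattern like this is how Ray coordinates many independent models or processes, which is exactly the dynamic, multi-component orchestration Antiga describes.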

Under the PyTorch Foundation, projects like Ray gain access to shared best practices and governance frameworks, as well as technical support such as CI pipelines and compute resources donated by hyperscalers. The Technical Advisory Council (TAC), chaired by Antiga, helps ensure that these projects remain stable, interoperable, and aligned with community priorities. “Our role is not to dictate roadmaps,” he clarifies. “We facilitate collaboration, identify challenges, and ensure the ecosystem evolves cohesively.”

Beyond infrastructure, this move speaks to the broader question of open source AI’s competitiveness in an increasingly closed ecosystem. While proprietary AI models often dominate headlines, open source frameworks and tooling continue to push the industry forward. “Open source is leveling up the entire industry,” Antiga emphasizes. “If developers can do for free what was proprietary six months ago, everyone — open and closed — has to move faster.”

He distinguishes between layers of openness in AI: foundational software (like PyTorch and Ray), ecosystem projects, and open-weight models. True open source, he notes, lies in the foundations — the infrastructure that enables experimentation, deployment, and scale. “What’s powerful is how these open building blocks make AI accessible and sustainable across research and enterprise,” Antiga adds.

For developers already using both Ray and PyTorch, this transition may feel natural. Many organizations have integrated the two for years. What changes now is the pace and scope of innovation. With the projects aligned under one foundation, shared contributions, governance, and integration will accelerate organically. “It’s not about forcing integration,” says Antiga. “It’s about creating an environment where collaboration happens naturally.”

The PyTorch Foundation’s next phase is about growing both foundational projects and the wider ecosystem. The foundation invites independent developers and companies to submit projects that meet its quality standards. “We want to create an ecosystem where participation doesn’t require transferring ownership, but still ensures quality and compatibility,” Antiga explains. This open submission model is designed to help emerging projects gain visibility and credibility while maintaining high technical standards.

Ultimately, bringing Ray under the PyTorch Foundation represents a shift in how AI infrastructure evolves — from siloed innovation to shared progress. It creates a common ground for companies, researchers, and developers to co-build the technologies that will define AI’s next decade.
