The conversation around AI typically focuses on model performance, but according to Steve Westmoreland, CIO of Cachengo, the real bottleneck isn’t your model—it’s the infrastructure it runs on. This insight drives Cachengo’s recent platinum membership with the OpenInfra Foundation, signaling a strategic shift toward decentralized AI computing solutions.
The Physics Problem in AI Infrastructure
Westmoreland explains the fundamental challenge: “If you have to bring that back to the cloud to make every decision, physics becomes your enemy.” This reality becomes critical in applications like New Jersey Transit’s innovation center, where Cachengo processes video and sensor feeds directly on buses to detect weapons and safety incidents without relying on wireless connections that can’t handle high-density video streams.
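To put rough numbers on that constraint, here is an illustrative back-of-envelope calculation; the camera count, per-stream bitrate, and uplink figure are assumptions chosen for illustration, not measurements from the New Jersey Transit deployment.

```python
# Back-of-envelope check (illustrative numbers only): streaming every camera on a
# bus to the cloud quickly exceeds a realistic cellular uplink, which is why the
# inference has to happen on the vehicle itself.
cameras = 8            # assumed cameras per bus
bitrate_mbps = 4.0     # assumed bitrate of one compressed 1080p stream
uplink_mbps = 10.0     # optimistic sustained cellular uplink on a moving vehicle

required_mbps = cameras * bitrate_mbps
print(f"required uplink: {required_mbps:.0f} Mbps, available: {uplink_mbps:.0f} Mbps")
# required uplink: 32 Mbps, available: 10 Mbps -> the link cannot carry the raw feeds
```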
The solution involves deploying AI inference models directly at the edge. “Our AI components—our inference engine—running on a piece of Cachengo hardware and software, are analyzing those videos using AI models that detect weapons and events such as someone falling down,” Westmoreland notes. These models can be updated remotely, for instance to add facial recognition capabilities, without requiring constant cloud connectivity.
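The control loop this implies is straightforward to sketch. Below is a minimal, hypothetical illustration of an on-vehicle pipeline: frames are analyzed locally against whatever models are currently staged on the device, and newer models are pulled opportunistically when a link is available. The function names, the model path, and the stub bodies are assumptions for illustration and do not represent Cachengo's actual software.

```python
# Hypothetical sketch of an edge inference loop: analyze frames locally, raise
# alerts on-device, and refresh models only when connectivity happens to exist.
import time
from pathlib import Path

MODEL_DIR = Path("/opt/edge/models")   # hypothetical local model store

def load_model(model_dir: Path):
    """Load whatever detection model is currently staged on the device."""
    return model_dir / "detector.onnx"   # stand-in for a real model handle

def detect_events(model, frame) -> list[str]:
    """Run the local model on one frame; returns labels such as 'weapon' or 'fall'."""
    return []   # placeholder: real inference runs on-device

def pull_model_update(model_dir: Path) -> bool:
    """If a wireless link is briefly available, fetch a newer model; otherwise skip."""
    return False

def camera_frames():
    """Yield frames from the on-vehicle camera; stubbed out here."""
    while True:
        yield object()
        time.sleep(1 / 30)

model = load_model(MODEL_DIR)
for frame in camera_frames():
    for event in detect_events(model, frame):
        print(f"alert: {event}")       # local alert, no cloud round trip
    if pull_model_update(MODEL_DIR):   # e.g. swap in a facial-recognition model
        model = load_model(MODEL_DIR)
```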
Beyond Traditional Data Center Deployment
Cachengo’s approach extends far beyond typical enterprise deployments. Their hardware runs in non-traditional environments—from transit buses to remote solar farms—where traditional cloud connectivity is impractical or expensive. Westmoreland describes working with organizations managing large solar installations in remote locations: “You care about what they’re doing. You want to store what they’re doing. But you don’t necessarily want to keep it or store it in the cloud. You want to process it in near real time.”
This distributed approach includes predictive maintenance capabilities, where AI models analyze audio patterns to detect bearing failures before they occur—demonstrating practical applications beyond the typical AI use cases.
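To make the idea concrete, here is a deliberately simplified sketch of how a lightweight acoustic check could run locally: it compares the energy in a frequency band where bearing wear tends to show up against a baseline recorded when the equipment was healthy. The sample rate, band, and threshold factor are illustrative assumptions, not details of Cachengo's models.

```python
# Simplified, hypothetical acoustic screening: flag an audio window whose energy
# in a chosen band rises well above a healthy baseline. A real predictive
# maintenance model would be more sophisticated; this only shows why the analysis
# is light enough to run on edge hardware.
import numpy as np

SAMPLE_RATE = 16_000      # assumed microphone sample rate (Hz)
BAND = (2_000, 6_000)     # assumed band where bearing wear shows up

def band_energy(window: np.ndarray) -> float:
    """Spectral energy of one audio window inside the band of interest."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1 / SAMPLE_RATE)
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return float(np.sum(spectrum[mask] ** 2))

def flag_anomaly(window: np.ndarray, baseline: float, factor: float = 3.0) -> bool:
    """Flag the window if its band energy is well above the healthy baseline."""
    return band_energy(window) > factor * baseline

# Demo with synthetic data: one second of 'healthy' noise, and the same signal
# with an added 4 kHz tone standing in for bearing noise.
rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
healthy = 0.1 * rng.standard_normal(SAMPLE_RATE)
worn = healthy + 0.5 * np.sin(2 * np.pi * 4_000 * t)
baseline = band_energy(healthy)
print(flag_anomaly(worn, baseline))   # True: elevated energy in the band
```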
Open Source Strategy and Community Integration
Jimmy McArthur, Director of Business Development at OpenInfra, emphasizes the significance of this partnership: “This is exactly the kind of Platinum Member we want at the OpenInfra Foundation, because they’re engaging with the community — they’re coming in with eyes open.”
Cachengo’s integration roadmap focuses on OpenStack projects including Nova, Swift, Ironic, and Neutron. The company plans to present at the OpenInfra Summit in Europe, sharing code that enables geographically dispersed storage components for decentralized and distributed processing.
Market Implications for AI Infrastructure
The partnership represents broader market trends. As McArthur explains, “OpenStack is the de facto software for running AI workloads,” but Cachengo’s approach to inference models at the edge represents a new application area that other organizations haven’t explored with OpenStack.
This collaboration comes as the OpenInfra Foundation joins the Linux Foundation, creating a larger ecosystem for infrastructure innovation. The addition of Cachengo as the third platinum member in the past year demonstrates growing momentum in open source infrastructure solutions.
Looking Forward
Westmoreland’s 35-year journey through open source—from VA Linux to the early days of SourceForge—provides perspective on infrastructure evolution. His surprise at OpenStack’s continued growth and stability reinforces the platform’s role as foundational technology supporting modern containerization and Kubernetes workloads.
The partnership between Cachengo and OpenInfra represents more than a business relationship—it’s a signal that open source infrastructure is evolving to meet the massive demands of AI workloads where traditional centralized approaches fail.
Edited Transcript
Swapnil Bhartiya: What if I told you that the real AI bottleneck is not your model, but the infrastructure it runs on? Welcome to TFiR on AI. I’m your host, Swapnil Bhartiya. In a major move, the OpenInfra Foundation has welcomed Cachengo, a rising AI hardware innovator, as its newest platinum member—a move that signals a major shift in how open source infrastructure is evolving to meet AI’s demanding requirements. Joining us today are Steve Westmoreland, Chief Information Officer at Cachengo, and Jimmy McArthur, Director of Business Development at OpenInfra. Today, we’re going to unpack what this partnership means for the future of AI and open infrastructure, Cachengo’s innovative approach to hardware optimization, and how this collaboration could reshape the landscape for developers and enterprises building AI solutions. Steve, Jimmy, it’s good to have you both on the show.
Jimmy McArthur: Nice to be here. Thanks for having us.
Swapnil Bhartiya: Steve, I would love to know a bit about the company and about yourself, so folks know what Cachengo is all about.
Steve Westmoreland: I’ve been in business for about 35 years now. I really entered the open source area around the VA Linux days. I’ve been very lucky career-wise to bounce back and forth in organizations that had high compliance requirements and were going through some type of cultural shift. When I first entered this arena, VA Linux was trying to get open source adopted into industries like financial services. We were doing a lot of foundational work, working with some of the early kernel developers—Linus was consulting, Mad Dog, Eric Raymond, some of the original open source advocates. That’s where we built out some of the things that were precursors to what we use today: SourceForge, a lot of the open source licensing that was being developed. The Open Source Development Lab was working out of Oregon, and that became the Linux Foundation and moved forward. I did a stint at the Linux Foundation when OpenStack was very active, along with Cloud Foundry and the Cloud Native Computing Foundation.
At Cachengo, some of those foundational people in open source came together several years ago to develop what was initially a storage profile—the concept was to put compute power directly on the storage and be able to use it remotely, in a decentralized way. Everything in our company is open source already, but we’ve really become a pioneer in decentralized distributed processing. We were doing real-time facial recognition seven years ago at remote locations without having to go back to the cloud. That’s what Cachengo is about—we’re pioneers in decentralizing that AI compute piece, storage, and high-density storage like video. Physics gets involved because if you have to bring that back to the cloud to make every decision, physics becomes your enemy.
Swapnil Bhartiya: Can you talk about what role Cachengo is playing in the AI space and what AI means for Cachengo? What made you join OpenInfra at the platinum level?
Steve Westmoreland: Our priorities are basically driving the adoption of open source. All of our software that’s been developed internally has already been open source, so the next step is increasing our footprint. Our products run in data center environments, but they also run on transit buses, in remote locations, in non-traditional data center footprints—like enabling a building to read its own sensors and become a smart building without having to transit to the cloud every time.
Our goal with the OpenInfra Foundation is in our DNA. You go to your friends first when you’re doing something. We saw that if we want to continue to accelerate adoption of our product and open source footprint, the smartest thing would be to go to friends and family—our colleagues—and use the good work that’s already been developed. We plan on living in the existing community and helping grow it. In our perfect world, it would be totally seamless. You’d be able to create your AI model, distribute it out to the edge, decentralize it where it doesn’t have to come back to a centralized location. You’d be able to distribute those inference models, make those decisions in real time, and get tremendous advantage through that AI—quicker, faster, better, and reduce your cost over time because you’re no longer transiting back or tying down more costly resources.
Swapnil Bhartiya: Jimmy, since OpenStack’s early days, you’ve reached peak adoption and then things stabilized and matured. Talk about where OpenInfra is, especially after joining the Linux Foundation, and what Cachengo’s arrival as a platinum member means for the foundation.
Jimmy McArthur: Steve said it best—collaboration is strength in open source. The OpenInfra Foundation recently joined forces with the Linux Foundation, which I see as a massive advantage. We’ve already seen through our open inference blueprint that we published last year that Linux, Kubernetes, and OpenStack are already working together alongside things like Ceph in production. Being able to work more closely with the Linux Foundation is great for all of our communities.
Right now, the power of open source is necessary to open up new markets and for things like digital sovereignty, but also for helping keep work in-country for a lot of developing nations. With Cachengo, they’re the third platinum member we’ve brought on in the last year, which shows dramatic growth with the OpenInfra Foundation and particularly with OpenStack. It shows the strength and flexibility of the software that Cachengo will be relying on—OpenStack powering their architecture in new and exciting ways.
We certainly went through the Gartner hype cycle with OpenStack and hit the trough of disillusionment a few years ago, but we’re on the way back up. This shows the flexibility of OpenStack to power solutions at the edge—novel things that three or four years ago we talked about theoretically, but here Cachengo is actually doing it in practice using our software.
Swapnil Bhartiya: Steve, are you joining the foundation or upgrading your membership?
Steve Westmoreland: We’re still a smaller company, so we’re not a behemoth, but we’re growing very fast. We actually joined OpenInfra from scratch, and it was an added bonus that OpenInfra was moving over to the Linux Foundation. Joining as a platinum member—we’re not funded with hundreds of billions of dollars, so it’s a situation where we put our money where our mouth is to support the projects and the commitment they’ve made to the cloud native and compute industry.
I was deeply involved in some of these things 10 years ago, and I was excited to find out that OpenStack and OpenInfra were increasing rather than decreasing. I had thought originally that Cloud Foundry was going to suck all the oxygen out of the room for OpenStack, and then when we created the Kubernetes foundation and saw the tremendous upsurge in containerization, I thought maybe these projects would live out their life cycle. But the awesome part is that the underpinnings of a lot of Kubernetes work are OpenStack and OpenInfra components. To see OpenStack and the work that the OpenInfra Foundation has done become that stable underpinning that supports Istio and other products—those core technology stacks have been built on it—made it an easier decision for us.
Swapnil Bhartiya: What kind of signal does this membership send to the market?
Jimmy McArthur: I think it shows the health of the open infrastructure community and software. The fact that we’re backed by the Linux Foundation shows that the largest software foundation in the world puts weight behind the work we’re doing. Along with Rackspace and Orchestral earlier this year, we’re seeing growth in every region of the world, including the US. Cachengo sends the signal that this isn’t just a Europe digital sovereignty play—the open source software world is growing right here in the US, and hardware companies are using it to power their architecture.
Swapnil Bhartiya: Steve, what’s going to be your priority of engagement with these communities over the next six months?
Steve Westmoreland: The short-term objectives are active right now. We had conversations as recently as last evening about what needs to happen to our technology stack to bridge it and make it fully OpenStack compatible. That’s going to fall initially in the realms of Nova, Swift, Ironic, and the Neutron projects—those are the low-hanging fruit that we have to assess and modify accordingly. We have team members assigned to those things currently.
Longer term, those initial pieces address scaling. We have extremely high-density nodes—our smallest component is smaller than a phone, and we’re talking about thousands of servers in a rack. So scaling, ease of deployment, and ecosystem support are all things that come with the OpenStack and OpenInfra components because they’re already doing that in the ecosystem.
We’ll continue to look at Keystone, Horizon, Cinder—the usual suspects. One of our internal projects that we’re actively working on right now has been proposed for presentation at the OpenInfra Summit in Europe, where we’re providing the community with some of our resources and code that allow geographically dispersed storage components to make decentralized and distributed processing possible.
Jimmy McArthur: If I could follow up on that—you asked what kind of message does Cachengo joining as a platinum member send. Steve frames it perfectly. This is exactly the kind of platinum member we want at the OpenInfra Foundation because they’re engaging with the community. They’re coming in eyes open, understanding that you have to put in work to get known in the open source community, but they also want to give back to the community and not just use the software. This is a perfect marriage.
Swapnil Bhartiya: What are some of the real bottlenecks for AI that Cachengo is addressing, and how will joining OpenInfra help the whole ecosystem?
Steve Westmoreland: The best way to answer that is to talk about one of our use cases. We’ve done work with New Jersey Transit as part of their innovation center. They have buses that move around that don’t have a cable attached. Typically, wireless is going to be a lower-speed connection than dark fiber. So there’s a Cachengo box on those transit vehicles processing video and sensor feeds.
In a perfect world, if I’m trying to detect something, I want to have the highest video profile I can get. Unfortunately, video is high-density information, so sending that up a wire is difficult. Our AI components and inference engine, running on Cachengo hardware and software, are looking at those videos. They have AI models that detect weapons and events like somebody falling down. These models can be updated—if we’re looking for a particular person, we could do facial recognition. Various inference engines can be loaded down to that distributed, decentralized module.
We’ve decoupled that from having to have a dark fiber cable attached to every bus. The bus can dump that video once it comes back to the warehouse because Cachengo equipment has high-density storage as well as the compute engine. This type of AI example is now being expanded to process audio files. When wheel bearings start to go bad, they make a particular sound that can be analyzed.
I talked to an organization this week that has big solar farms. They’re by nature very remote, and you care about what they’re doing. You want to store what they’re doing, but you don’t necessarily want to keep it and store it in a cloud. You want to process it in near real time. In partnership with Intel, we’re adding the capability to our sets to access the Gaudi processors so we can do the modeling portion in near real time as well. I don’t have to buy time on the most expensive GPU processor in the known universe—I can buy the moderately priced GPU processor and build my models and distribute my inference models at the same time.
Jimmy McArthur: Until I met Cachengo, I didn’t realize that you could use AI for anything except fake political cartoons. But to Steve’s point, this really shows what we’ve been saying for a long time at OpenInfra—that OpenStack is the de facto software for running AI workloads. We’ve seen it in lots of other areas, but this is a new and exciting area. As far as I know, other organizations aren’t using OpenStack for these kinds of inference models, so it helps further that story for us and shine a spotlight on the cool work they’re doing at Cachengo.
Swapnil Bhartiya: Steve, Jimmy, thank you so much for joining us today and sharing these insights into what could be a game-changing partnership for the AI infrastructure space. It’s clear that Cachengo’s joining OpenInfra as a platinum member represents more than just a business relationship—it’s a signal that open source infrastructure is evolving to meet the massive demands of AI workloads.
Steve Westmoreland: Thank you. Appreciate your time.
Jimmy McArthur: Good to see you, Swapnil.