AI infrastructure is scaling to hundreds of thousands of automatically generated queries per second. But traditional vector search methods can’t keep up—inaccurate search results drive compute waste and hallucinations across enterprise AI stacks. This isn’t just a performance problem. It’s an architectural crisis impacting every organization deploying agentic AI at scale.
OpenSearch has evolved from an AWS-led fork of Elasticsearch into critical AI infrastructure powering production deployments at Changi Airport, Atlassian, and NVIDIA’s agentic AI platform. Since joining the Linux Foundation 18 months ago, the project has doubled to 1.4 billion downloads, expanded to over 400 contributing companies, and now leads in GigaOm Research’s observability platform rankings.
The Guest: Bianca Lewis, Executive Director at OpenSearch Software Foundation
Key Takeaways
- OpenSearch downloads doubled from 700 million to 1.4 billion since Linux Foundation transition, with over 400 companies contributing
- Hybrid search architecture combines lexical and semantic search to prevent AI hallucinations and reduce compute costs at scale
- AI-native observability suite monitors AI agents through their full lifecycle, tracking API calls and queries with integrated telemetry from OpenTelemetry and Prometheus
- Vendor-neutral governance under Linux Foundation drives enterprise adoption—Changi Airport and Atlassian case studies demonstrate production maturity
- OpenSearch announced as foundational AI infrastructure layer in NVIDIA GTC keynote for agentic AI platforms
***
In this exclusive interview with Swapnil Bhartiya at TFiR, Bianca Lewis, Executive Director at OpenSearch Software Foundation, discusses how OpenSearch has transformed from an Elasticsearch fork into critical AI infrastructure, the technical innovations enabling hybrid search to prevent hallucinations, and why vendor-neutral governance accelerates enterprise adoption.
From Fork to Foundation: OpenSearch’s Journey Under Linux Foundation
OpenSearch began as an Elasticsearch fork by AWS in 2021, but its trajectory changed dramatically when it transitioned to the Linux Foundation 18 months ago. The vendor-neutral governance model provided credibility that transformed community participation and enterprise adoption. The project has now surpassed 1.4 billion downloads and attracts contributions from over 400 companies.
Q: What momentum have you seen since OpenSearch joined the Linux Foundation?
Bianca Lewis: “It was originally forked by AWS and a few founding members, but about a year and a half ago, when it was donated to the Linux Foundation, the OpenSearch project really began to innovate, grow, and build an amazing community because it is objectively governed by the Linux Foundation. Under the OpenSearch Software Foundation, it gained the credibility of not being vendor-owned and of being part of a nonprofit organization working for the good of the community. We have grown from 700 million downloads to 1.4 billion downloads. We now have over 400 companies contributing, 1.5 million page visits per month, and our membership is also growing steadily. We’ve added three or four new members in the last two months alone.”
The Linux Foundation transition provided more than credibility—it enabled technical independence and accelerated innovation cycles. OpenSearch now ships releases every eight weeks, currently on version 3.5, with enterprise features and integrations that extend far beyond its search origins.
Why Hybrid Search Architecture Prevents AI Hallucinations
Traditional vector search methods fail at the scale required by agentic AI. When AI agents generate hundreds of thousands of queries per second, inaccurate search results compound exponentially, triggering hallucinations across retrieval-augmented generation (RAG) layers and driving massive compute waste. OpenSearch addresses this with hybrid search architecture that combines lexical filtering with semantic search.
Q: What makes OpenSearch foundational for AI infrastructure, especially considering NVIDIA featured it at GTC?
Bianca Lewis: “OpenSearch operates on multiple levels. We can build what used to be our observability, search, and security monitoring on a centralized AI infrastructure. On top of that, we’re leading the market in modernizing search itself, which is critical to all of this. When we’re dealing with individual queries, the impact on compute is limited, but when we scale to AI agents generating possibly hundreds of thousands of queries per second, we can’t afford inaccurate results from vector searches that return many irrelevant or adjacent results.”
Q: How does OpenSearch’s approach to search prevent these problems?
Bianca Lewis: “What OpenSearch has done is modernize search using hybrid search. We first filter with lexical search, which provides the context needed for semantic search. This helps prevent the AI infrastructure and RAG layer from hallucinating, enabling more accurate results and a more cost-efficient infrastructure. We can also plug in LLMs of your choice—we’re agnostic to all of that. Altogether, this combines into a very powerful, AI-native data layer.”
This hybrid approach addresses a fundamental architectural problem: semantic search alone produces too many adjacent but irrelevant results, forcing downstream AI models to process noise. By applying lexical filtering first, OpenSearch provides semantic models with relevant context, dramatically improving accuracy while reducing compute overhead.
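As a concrete illustration of the lexical-plus-semantic pairing described above, the sketch below builds the request body for an OpenSearch `hybrid` query, which combines a lexical `match` sub-query with a `neural` (embedding-based) sub-query. The field names, model ID, and pipeline name are hypothetical placeholders; the `hybrid` query type assumes a recent OpenSearch version (2.11+) with a search pipeline configured to normalize and combine the two score sets.

```python
# Minimal sketch of an OpenSearch hybrid query body, assuming an index
# with a "text" field and a "text_embedding" k-NN vector field, plus a
# deployed embedding model. All names here are illustrative, not from
# the interview.

def build_hybrid_query(user_query: str, model_id: str, k: int = 10) -> dict:
    """Return a request body pairing a lexical match with a neural k-NN query."""
    return {
        "query": {
            "hybrid": {
                "queries": [
                    # Lexical leg: exact term matching narrows the candidate set.
                    {"match": {"text": {"query": user_query}}},
                    # Semantic leg: embedding-based retrieval over the same corpus.
                    {
                        "neural": {
                            "text_embedding": {
                                "query_text": user_query,
                                "model_id": model_id,
                                "k": k,
                            }
                        }
                    },
                ]
            }
        }
    }

body = build_hybrid_query("flight schedule changes", model_id="<your-model-id>", k=5)
# The body would then be sent to a search pipeline, e.g.:
#   POST /my-index/_search?search_pipeline=hybrid-pipeline
```

The pipeline (not shown) is what merges the two result sets, typically with min-max score normalization and a weighted combination, so the lexical leg can anchor the semantic leg exactly as Lewis describes.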
AI Monitoring AI: Observability Suite for Agentic Workflows
Monitoring agentic AI presents unique challenges. Unlike traditional applications where humans trigger discrete workflows, AI agents generate cascading chains of automated queries and API calls. OpenSearch’s observability suite provides unified visibility across traces, logs, and metrics—including integrated Prometheus support—enabling teams to monitor AI agents through their complete lifecycle.
Q: What advancements has OpenSearch made in observability for AI workloads?
Bianca Lewis: “We’re already on version 3.5, as our release cycles run every eight weeks. We’ve made significant advancements with these AI agents, enabling us to configure them in production. In our new Observability suite, which has not yet been officially released, we will be able to monitor agentic AI systems throughout their full lifecycle, including the APIs they call and the queries they run. We are essentially creating an additional telemetry layer, where AI is monitoring AI as well.”
Q: How does OpenSearch integrate different telemetry sources?
Bianca Lewis: “We’ve plugged in different telemetries to give context to the data, which specifically means that we can see traces and logs in the same user interface now in OpenSearch, and we’ve recently added Prometheus into that as well. Scaling metrics has traditionally been challenging in OpenSearch; now we can see Prometheus metrics in that same user interface too.”
This unified telemetry approach addresses a critical gap in AI infrastructure monitoring. Teams no longer need to correlate signals across separate tools—traces, logs, and metrics converge in OpenSearch dashboards, enabling faster troubleshooting and deeper insight into agent behavior.
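The join that makes this convergence possible is the shared OpenTelemetry trace ID carried by every signal from one agent run. The toy sketch below (not an OpenSearch API; the document shapes are hypothetical) shows the basic correlation: grouping spans and logs by `trace_id` reconstructs a single agent's timeline.

```python
# Illustrative sketch: correlating traces and logs on a shared
# OpenTelemetry trace_id, the same join a unified dashboard performs.
# Document fields below are invented for the example.
from collections import defaultdict

spans = [
    {"trace_id": "a1", "name": "agent.plan", "duration_ms": 12},
    {"trace_id": "a1", "name": "tool.search_api", "duration_ms": 87},
]
logs = [
    {"trace_id": "a1", "level": "INFO", "message": "query dispatched"},
    {"trace_id": "b2", "level": "WARN", "message": "unrelated agent run"},
]

def correlate(spans: list, logs: list) -> dict:
    """Group spans and logs by trace_id into one timeline per agent run."""
    runs = defaultdict(lambda: {"spans": [], "logs": []})
    for span in spans:
        runs[span["trace_id"]]["spans"].append(span)
    for log in logs:
        runs[log["trace_id"]]["logs"].append(log)
    return dict(runs)

runs = correlate(spans, logs)
# runs["a1"] now holds both spans and the log line from the same agent run.
```

In practice this grouping happens at query time in the dashboard rather than in application code, but the underlying key is the same trace identifier.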
Enterprise Production Deployments: Changi Airport and Atlassian
OpenSearch maturity extends beyond technical capabilities into verified enterprise production deployments. Recent case studies demonstrate the platform powering mission-critical infrastructure at global scale, from airport operations to software development platforms.
Q: How mature is OpenSearch for enterprise production use?
Bianca Lewis: “Technologically, it’s very mature, especially considering that we’re in a very fast-moving environment. The advances we’ve made in both controlling and optimizing AI search, modernizing search, and the ability to trace and understand what AI agents are doing on our infrastructure are probably far ahead of any other platform today. We now have enterprise features for companies. We’re going to be announcing Cornerstone enterprise features pretty soon.”
Q: Can you share examples of enterprise deployments?
Bianca Lewis: “We just released a very interesting case study from Changi Airport in Singapore, one of the largest airports in the world, using OpenSearch for search across the whole retail sector, airport operations, ticketing, parking, and flight schedules. We also released another case study showing high enterprise adoption from Atlassian, which of course was purchased by RPM, a Premier member. So we’ve got social proof, we’ve got technical proof, and now we’ve got enterprise mechanisms to be announced in a couple of weeks.”
The Changi Airport deployment demonstrates OpenSearch handling diverse, real-time search workloads across critical infrastructure. The Atlassian case study validates the platform for software development ecosystems at massive scale, particularly relevant given RPM’s acquisition and Premier membership in the OpenSearch Foundation.
Cross-Pollination with Linux Foundation AI Projects
OpenSearch doesn’t operate in isolation within the Linux Foundation ecosystem. Integration with OpenTelemetry for telemetry ingestion, Kubernetes for orchestration, and the Agentic AI Foundation for agent frameworks creates a comprehensive open source AI infrastructure stack. This cross-project collaboration accelerates innovation while maintaining vendor neutrality.
Q: What other open source projects does OpenSearch collaborate with?
Bianca Lewis: “We’ve got OpenTelemetry, which feeds all sorts of different telemetry into Data Prepper, the ingestion layer of OpenSearch, to provide context. We’ve also got a lot of CNCF technologies like Kubernetes. We’ve also got the new Agentic AI Foundation, which is a very key adjacent project of the Linux Foundation. I think one of the things we’re doing better this year is collaboration between our different projects, in terms of cooperation, ideas, and integrations. In the next releases of OpenSearch, you’re going to see even more and better integration.”
Q: How do you see AI evolving under Linux Foundation governance?
Bianca Lewis: “We’re at a precipice now where AI is scaling to such an extent that it can’t be left in the hands of any single profit-driven vendor. For me, it’s logical that open source under the Linux Foundation will lead the innovation around AI with all of these different foundations. The open source model itself is a protected layer. As we scale, we’ve got more eyes in the community. We’ve got more companies using the open source layers with better data sovereignty, because they own those projects at their company. Compliance becomes easier as well.”
This vision mirrors the trajectory of Linux and Kubernetes—foundational technologies governed by vendor-neutral foundations creating level playing fields for innovation. As AI infrastructure becomes as critical as operating systems and orchestration layers, open source governance models provide the trust and collaboration frameworks necessary for broad enterprise adoption.