Guest: Matan Bar-Efrat
Company: Rein Security
Show: The Agentic Enterprise
Topic: AI agents
AI agents are writing code, making business decisions, and connecting systems through MCP architectures—but security teams are flying blind. With AI-generated code exploding and non-deterministic agents taking over critical business processes, the traditional application security playbook is no longer enough.
Matan Bar-Efrat, CEO and Co-Founder of Rein Security, has watched this blind spot grow into one of the most critical gaps in modern AppSec. His company is tackling it head-on by bringing real-time production context to application security—designed specifically for the environments where tomorrow’s risk will actually exist.
The company’s name itself tells part of the story. “Rein—like the reins of a horse—means to contain,” Bar-Efrat explains. “The idea was to have something that means something, that tells part of the story of the company itself. The reins of a horse are really what we’re able to do, which is to contain the threat, protect the companies that we’re working with, and take control.”
But what exactly is the gap that Rein Security is addressing? Despite the explosion of security tools over the past decade, there’s a massive blind spot between gateways and low-level system monitoring. “We have a lot of products around gateways, a lot of products around eBPF,” Bar-Efrat says. “It’s very clear that there is a very big gap in between.”
That gap becomes critical in the world of AI security. With agents built on models and frameworks like Claude, LangChain, and Microsoft Foundry proliferating in enterprises, monitoring just the prompt leaves security teams outside the execution context. “The fact that you can only monitor the prompt leaves you without enough understanding of what is the impact of a malicious prompt on your organization,” he explains.
Rein Security provides a visibility layer that captures everything an agent or application does—from the prompt it receives to the API endpoints it queries, the database calls it makes, the tools it uses, and the actions it performs including network connections, binary executions, and file system access. This enables complete inventory and discovery of agents, MCPs, applications, libraries, and APIs, along with vulnerability management.
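The mechanism Bar-Efrat describes is, at its core, runtime instrumentation. As a minimal illustration of the idea (not Rein’s actual implementation), the sketch below uses Python’s standard `sys.addaudithook` to record the same classes of events he lists for an agent tool call: file system access, network connections, and binary executions.

```python
import sys

# Collected runtime events: (event_class, brief_detail)
events = []

def security_audit(event, args):
    # Filter for the event classes named in the text:
    # file system access, network connections, binary executions.
    if event == "open":
        events.append(("file_access", str(args[0])))
    elif event == "socket.connect":
        events.append(("network_connect", str(args[1])))
    elif event == "subprocess.Popen":
        events.append(("binary_exec", str(args[0])))

sys.addaudithook(security_audit)

# A stand-in for what an agent tool might do in response to a prompt.
def agent_tool_call():
    with open("notes.txt", "w") as f:
        f.write("agent output")

agent_tool_call()
print(events)  # includes ("file_access", "notes.txt")
```

Note that audit hooks cannot be removed once installed, which is by design: the visibility layer is meant to see everything the process does for its entire lifetime.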
The timing couldn’t be more critical. AI has fundamentally shifted the advantage to attackers. “These models are just amazing at finding zero-days,” Bar-Efrat notes, referring to recent research from Anthropic. “We used to call zero-days zero-days because once in a while you would find something that nobody knew about. But with AI, I think we should call them ‘every-days.’”
The velocity of code has exploded with AI assistance, and simultaneously, LLMs have become exceptional at finding vulnerabilities and automating reconnaissance and attack processes. Add to this the inherently non-deterministic nature of AI applications—where you don’t necessarily have an expected outcome—and you have a perfect storm.
This is where traditional security approaches fall short. “What you care about is not the response to the prompt,” Bar-Efrat emphasizes. “What you care about is the impact of that prompt or interaction on your environment, on your infrastructure, on your data. And you can only get that if you have the application context—the reasoning and the data that it used.”
The approach also addresses a fundamental shift in how security must operate in the AI era. “The fact that today security is embedded, the fact that we’re at a point where we want to enable developers to experiment—how do you do that if all you’re doing is chasing them with PRs about vulnerabilities?” Bar-Efrat asks.
The answer lies in focusing on what’s actually impactful: which parts of code are actually running, which CVEs and libraries and functions are actually executing, and how that could cause harm based on whether payloads come from external or internal APIs. With a dynamic baseline that learns application behavior, security teams can tell developers, “Do what you want to do. We’ve got your back.”
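One generic way to establish “which functions are actually executing” is runtime tracing. This illustrative sketch (an assumption about the general technique, not Rein’s mechanism) uses Python’s `sys.setprofile` to build a baseline of functions observed running under a real workload, so a vulnerable-but-never-executed function can be deprioritized:

```python
import sys

executed = set()  # functions observed running at least once

def profiler(frame, event, arg):
    if event == "call":
        code = frame.f_code
        executed.add((code.co_filename, code.co_name))

def reachable_under(workload):
    """Run a workload and return the set of functions that actually executed."""
    sys.setprofile(profiler)
    try:
        workload()
    finally:
        sys.setprofile(None)
    return executed

# Two library functions: only one is ever called in production.
def used_helper():
    return 42

def unused_helper():  # flagged by static scanners, but never runs
    return 0

funcs = reachable_under(used_helper)
names = {name for _, name in funcs}
print("used_helper" in names, "unused_helper" in names)  # True False
```

A static scanner would report findings in both helpers equally; the runtime baseline shows only `used_helper` is in the executing path, which is the prioritization signal the text describes.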
This becomes especially critical for regulated industries dealing with AI agents that handle customer-facing processes. Bar-Efrat offers a concrete example: “Let’s say you have an AI agent responsible for pricing an insurance contract with a customer. You want to be able to go back and audit it and make sure that it didn’t hallucinate and that the action it made wasn’t harmful in terms of data access or connections.”
The ability to prove that an agent made a specific decision based on specific data and reasoning—at scale and in a way that maintains data sovereignty—is what separates production-aware security from perimeter monitoring.
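As a minimal sketch of what such an audit trail might capture for the insurance-pricing example, the record below ties a decision to the prompt, reasoning, data, and actions behind it. The field names and schema are illustrative assumptions, not Rein’s data model.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentAuditRecord:
    """One auditable agent decision: what it was asked, what it used, what it did."""
    agent_id: str
    prompt: str       # input that triggered the decision
    reasoning: str    # the model's stated rationale
    data_sources: list = field(default_factory=list)  # records/APIs actually read
    actions: list = field(default_factory=list)       # side effects performed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical record for the pricing-agent scenario from the text.
record = AgentAuditRecord(
    agent_id="pricing-agent-01",
    prompt="Quote a home insurance policy for customer 4411",
    reasoning="Applied standard rate table; no discounts triggered",
    data_sources=["crm:customer/4411", "db:rate_table_v7"],
    actions=["api:quotes.create"],
)
print(asdict(record)["agent_id"])  # pricing-agent-01
```

With records like this persisted per decision, an auditor can answer exactly the questions Bar-Efrat raises: what data the agent accessed, what it concluded, and whether the resulting action was within bounds.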
Looking ahead, Rein Security is doubling down on the agentic world, with support already in production for MCPs, agents, and service-to-service interactions. The focus is increasingly on the identity side and what Bar-Efrat calls “almost like a business logic problem”—understanding the specific context of access when a specific user from a specific group executes specific actions on specific resources.
As enterprises move from general LLMs to productivity tools to agents that replace entire business processes, production visibility isn’t just a nice-to-have. It’s the foundation for security in an AI-driven world.