Guest: Sumeet Singh (LinkedIn)
Company: Aptori
Show Name: An Eye on AI
Topic: Application Security
Application security teams are buried under a flood of vulnerability alerts, most of which are false positives. This noise wastes valuable time and allows real flaws to slip through. Aptori is tackling this long-standing problem by rethinking how vulnerabilities are identified, triaged, and prioritized. In this conversation, Sumeet Singh, Founder and CEO of Aptori, joins Swapnil Bhartiya to discuss how the company’s new AI Triage feature helps organizations focus on what truly matters.
Singh explains that while the pace of software development has accelerated dramatically, most of the security tools used today were designed decades ago for waterfall models and slow release cycles. As continuous delivery and AI-driven development take hold, these outdated systems are breaking under pressure, generating endless alerts without context. The result is alert fatigue and a loss of trust in the security process.
Aptori’s AI Triage aims to restore clarity. Instead of relying on static pattern matching, the system understands the logic, structure, and context of an application. It analyzes how data flows through the code to determine whether a reported issue is genuinely exploitable. This “AI-powered security engineer” helps developers and analysts focus on real problems instead of noise. Singh also addresses the industry’s growing “AI washing” problem, where many vendors merely rebrand their old tools with AI labels. Aptori’s approach combines traditional AI, NLP, and machine learning to model code, reserving large language models for assistive tasks like data generation, which keeps the analysis deterministic, consistent, and accurate.
The discussion explores how large enterprises can use AI Triage to achieve consistent, deterministic security at scale. Singh also shares his long-term vision: a future where security is embedded from the start of software creation—making “secure by design” a reality in the AI era.
Here is the edited Q&A of the interview:
Swapnil Bhartiya: If you’re part of an application security team, you know the struggle—constant floods of vulnerability alerts, most of which are not real threats. You waste time chasing noise while real flaws slip through. What if you could cut straight to the vulnerabilities that actually matter? This is the promise behind Aptori’s new AI Triage feature. Today I’m joined by Sumeet Singh, Founder and CEO of Aptori, to explore how their approach could change the day-to-day reality of AppSec teams. Sumeet, it’s great to have you. Tell me about the company and what problem you saw in this space.
Sumeet Singh: Security might seem like a solved problem, but with how fast software is being built today, it’s anything but. The rate at which we can create, deploy, and scale new applications is unprecedented, and that speed introduces huge challenges. How do you ensure that all this software is secure? That’s what led to the creation of Aptori. Our mission is to help enterprises build secure software in this AI era—when software is being created almost at the speed of thought.
The gap we saw is that many existing security tools are decades old. They rely on pattern matching instead of understanding the actual logic or context of applications. As a result, developers and analysts are flooded with alerts that don’t reflect real risks. Aptori takes a different approach. Our platform understands your code, context, and architecture. It performs deep analysis to automatically triage the issues that really matter so teams can stay secure while releasing software faster.
Swapnil Bhartiya: You mentioned legacy tools. Why is the traditional approach to application security proving so ineffective today?
Sumeet Singh: Most existing tools were designed for a time when software followed the waterfall model. You had long release cycles and plenty of time for testing. Those tools would run point-in-time scans, generate thousands of issues, and teams would spend weeks sorting through them. Then came continuous integration, DevSecOps, and shift-left movements, but the industry simply took those same old tools and put them into CI/CD pipelines. The result? More noise, more frustration, and less trust.
Now, with AI accelerating development even further, these old systems are cracking. They can’t keep up with the velocity of code changes. Developers get huge reports filled with hundreds or thousands of alerts, most of which are irrelevant. They lose confidence in the process, ignore results, and security suffers. Without re-engineering both the tools and the approach, we risk going backward instead of forward.
Swapnil Bhartiya: You’ve launched AI Triage to address this. How does it work, and how does it avoid the false positives that plague traditional systems?
Sumeet Singh: The key idea was to replicate what a skilled human security engineer does. When analysts see an alert, they review the code, trace the data flow, and understand the control logic to decide whether it’s a real vulnerability. We’ve taught AI to do exactly that. AI Triage analyzes the alert, steps through your code, and interprets the context to determine whether an issue is truly exploitable or not. It provides reasoning for its conclusions, giving teams confidence in what to fix first.
Traditional risk scoring tries to rank vulnerabilities using mathematical formulas—where it runs, what component it affects, and so on. But that math never verifies whether the issue is actually real. AI Triage, on the other hand, validates the fault itself. It provides deterministic reasoning, which is far more valuable than heuristic scoring.
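To make the contrast concrete, here is a minimal, self-contained sketch of the two approaches Singh describes. It is illustrative only, not Aptori’s implementation, and every function and node name in it is hypothetical: heuristic_score ranks a finding without ever checking it, while is_exploitable traces a toy data-flow graph to see whether untrusted input can actually reach a dangerous operation without passing through a sanitizer.

```python
# Conceptual sketch only, NOT Aptori's implementation. It contrasts heuristic risk
# scoring with triage that traces data flow to decide whether an alert is actually
# exploitable. All names and the toy graph below are made up for illustration.
from collections import deque

# A toy data-flow graph: each node is a code location, edges show where data flows next.
FLOW_EDGES = {
    "http_param": ["build_query"],        # untrusted input enters here
    "build_query": ["sanitize", "run_sql"],
    "sanitize": ["run_sql"],
}
SANITIZERS = {"sanitize"}                 # nodes that neutralize tainted data
SINKS = {"run_sql"}                       # dangerous operations

def heuristic_score(severity: float, exposed: bool) -> float:
    """Traditional-style scoring: ranks the alert but never verifies it is real."""
    return severity * (2.0 if exposed else 1.0)

def is_exploitable(source: str) -> tuple[bool, list[str]]:
    """Triage-style check: is there a path from the untrusted source to a sink
    that does not pass through a sanitizer? Returns the verdict and the path."""
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in SINKS:
            return True, path             # tainted data reaches a sink unsanitized
        if node in SANITIZERS:
            continue                      # this branch is cleaned; stop following it
        for nxt in FLOW_EDGES.get(node, []):
            if nxt not in path:           # avoid cycles
                queue.append(path + [nxt])
    return False, []

if __name__ == "__main__":
    print("heuristic score:", heuristic_score(severity=7.5, exposed=True))
    exploitable, trace = is_exploitable("http_param")
    print("exploitable:", exploitable, "via", " -> ".join(trace))
```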
Swapnil Bhartiya: Every new tool can add complexity. How does AI Triage change day-to-day life for developers and security teams?
Sumeet Singh: One of the biggest challenges in large organizations is achieving consistency and determinism. You have many teams working on different projects with different practices, but the security team has to ensure everything is uniformly secure. AI Triage brings that consistency back. It acts like an AI-powered security engineer that analyzes, reasons, and reports the same way every time. That gives organizations determinism at scale, something humans alone can’t achieve because there just aren’t enough of them. With software being written ten times faster than before, automation like this is essential.
Swapnil Bhartiya: Many vendors today claim to have “AI-powered” tools. How is Aptori’s approach different?
Sumeet Singh: There’s definitely a lot of AI washing going on. Many products simply plug in large language models and call it AI. We use AI in a more deliberate way. Our foundation relies on traditional AI, NLP, and machine learning to build semantic models of code and applications. That helps us reason about what’s actually happening in the software. We then use LLMs in a supportive role, such as generating input data when needed, but the logic and analysis come from our deterministic AI models.
The problem with pure LLM-based systems is that they’re non-deterministic—they may give you different answers each time. In security, that’s unacceptable. You need consistency and the ability to verify reasoning. That’s why we keep LLMs as assistants, not the core decision-makers.
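That division of labor can be sketched in a few lines. The sketch below is a conceptual illustration under assumed names (Verdict, deterministic_analysis, suggest_test_inputs), not Aptori’s architecture: the verdict is always produced by a deterministic check, and the LLM-style helper (stubbed here) only supplies candidate inputs and never influences the decision.

```python
# Conceptual sketch only, not Aptori's code. It illustrates keeping the decision
# deterministic while an LLM-style helper plays a purely assistive role.
# All names, fields, and payloads are hypothetical.
from dataclasses import dataclass

@dataclass
class Verdict:
    exploitable: bool
    reasoning: str

def deterministic_analysis(finding: dict) -> Verdict:
    """Core decision: the same input always yields the same verdict."""
    reaches_sink = finding["source"] == "user_input" and not finding["sanitized"]
    return Verdict(
        exploitable=reaches_sink,
        reasoning="unsanitized user input reaches a sink" if reaches_sink
                  else "no unsanitized path from input to a sink",
    )

def suggest_test_inputs(finding: dict) -> list[str]:
    """Assistive step. A real system might ask an LLM for candidate payloads here;
    the output only exercises the code and never decides the verdict."""
    return ["' OR 1=1 --", "<script>alert(1)</script>"]  # static placeholders

def triage(finding: dict) -> Verdict:
    verdict = deterministic_analysis(finding)      # decision made deterministically
    if verdict.exploitable:
        _ = suggest_test_inputs(finding)           # optional, assistive only
    return verdict

if __name__ == "__main__":
    print(triage({"source": "user_input", "sanitized": False}))
```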
Swapnil Bhartiya: As enterprises start adopting AI-driven development at scale, how do you see Aptori shaping the future of secure software development?
Sumeet Singh: Our vision is that security should never be a bottleneck. It should be a built-in design principle. As AI speeds up development, we need equal innovation in how we validate and secure code. The reality today is that AI helps us write code faster, but it doesn’t make that code more secure. In fact, it’s expanding the attack surface.
Aptori’s goal is to provide that missing layer of trust and confidence. As software creation accelerates, our tools ensure that what’s being deployed is actually secure. Eventually, security will blend seamlessly into the development process, giving developers the freedom to innovate without sacrificing safety.
Swapnil Bhartiya: Can you tease what else Aptori is working on?
Sumeet Singh: Application security is a huge domain with many challenges. Our immediate focus is to keep expanding the depth of our analysis—to ensure that all code, whether static, dynamic, or running in production, can be validated for security. We also plan to extend our work into adjacent layers, such as configurations and possibly network-level insights. The long-term vision is that no matter how or where software is produced—by humans or AI—it should be secure by default, without slowing anyone down.
Swapnil Bhartiya: Sumeet, thank you for joining me. This has been a fascinating discussion.
Sumeet Singh: Thank you, Swapnil. It was great speaking with you.
Swapnil Bhartiya: And to our viewers, how do you see AI reshaping AppSec in your organization? Share your thoughts, and don’t forget to subscribe to TFiR for more conversations like this.