Security

How Do You Detect AI Bots When They Act Exactly Like Humans? | Rupesh Chokshi, Akamai

0

Guest: Rupesh Chokshi (LinkedIn)
Company: Akamai
Show Name: Secure By Design
Topic: Agentic AI

The question keeps security teams awake at night. AI bots have crossed a threshold. They mimic human behavior with precision that defeats traditional detection methods. Rupesh Chokshi, Senior Vice President and General Manager of Application Security at Akamai, addressed this challenge head-on in a recent conversation with TFiR.

The old playbook is failing. Signature-based detection and rate limiting were built for a different era. Chokshi explained that as AI bots become more sophisticated, security must evolve at the same pace. Akamai launched Firewall for AI specifically to address this shift. The product deploys guardrails against prompt injection attacks and implements OWASP Top 10 protections for input and output validation. Every customer running a Gen AI application, LLM, or chatbot needs this layer of defense.

But detection goes deeper than firewalls. The fundamental question has changed. Instead of asking if traffic is human or bot, security teams now ask if a machine is good or bad. Will an agentic AI follow through on its intended seven-step process? Or will it deviate in ways that signal malicious intent? Intent verification becomes critical. Chokshi emphasized that understanding the purpose behind an interaction helps identify when something goes wrong.

Verification protocols are evolving rapidly. Cryptographic handshakes and fingerprints establish trust. Security teams need to confirm that the entity on the other end is legitimate, whether human or agentic AI bot. The Model Context Protocol and its evolution to Agentic Context Protocol represent this shift. These protocols create layers of intelligent security specifically designed for AI interactions.

Akamai has a distinct advantage in this fight. The company sees majority of internet traffic. That visibility generates massive datasets covering good and bad traffic patterns across industries. When credential stuffing happens in financial services or attacks target airlines, Akamai correlates intelligence across verticals. Machine learning and AI models, both supervised and unsupervised, power detection systems that learn from this data.

Fighting AI with AI is not just a slogan. It is operational reality. When attacks arrive at machine scale with intelligent automation, defense must respond with equal speed and sophistication. Chokshi pointed out that speed matters fundamentally. The ability to detect, adjust detections, and plug holes in real time determines success or failure. Manual processes cannot compete.

The security industry is converging on this challenge. Smart teams across companies are collaborating on solutions. Threat research, threat intelligence, data science, and engineering groups are working together. The pace of innovation reflects the urgency. As agentic AI becomes the default mode of interaction, security architecture must adapt or fail.

This shift affects every digital business. Whether you run e-commerce, SaaS platforms, or enterprise applications, AI-driven automation is coming. The question is not if but when your systems will face sophisticated bot traffic. Understanding intent, deploying AI firewalls, implementing verification protocols, and leveraging machine learning for detection are no longer optional. They are table stakes for survival in the AI era.

Kubernetes 1.35 Brings In-Place Pod Updates and Native Identity to Production | Drew Hagen

Previous article

Why CISOs Must Build LLM Skills Now Before AI-Managed Attacks Arrive | Steve Winterfeld, Akamai

Next article