Guest: Rupesh Chokshi (LinkedIn)
Company: Akamai
Show Name: Secure By Design
Topic: Agentic AI
The rise of malicious AI bots like FraudGPT and WormGPT has created a security dilemma for enterprises. These tools are not your typical automation. They launch sophisticated cyber attacks, generate convincing phishing content, and exploit vulnerabilities without detection. The challenge? Telling them apart from the legitimate AI assistants and agentic AI systems that businesses depend on every day.
📹 Going on record for 2026? We're recording the TFiR Prediction Series through mid-February. If you have a bold take on where AI Infrastructure, Cloud Native, or Enterprise IT is heading—we want to hear it. [Reserve your slot
Rupesh Chokshi, Senior Vice President and General Manager of Application Security at Akamai, recently sat down with TFiR to discuss this critical security challenge. His insights reveal why traditional bot detection methods are no longer enough and what organizations must do to prepare for an agentic AI future.
Unlike legitimate AI bots that follow ethical protocols and provide transparency, FraudGPT and WormGPT operate with malicious intent. They do not provide known handshakes. They seek backdoors. They extract data without permission. If they find an opening, they escalate from data collection to malware deployment and full-scale attacks.
This is not theoretical. Security teams see it every day. The threat landscape has shifted. Traditional perimeter defenses cannot distinguish between a helpful AI agent and a malicious one based on behavior alone. Both can appear intelligent. Both can adapt. Both can automate tasks at scale.
The answer, according to Chokshi, lies in establishing digital trust at every interaction. Akamai recently worked with Amazon on the launch of Agent Core, implementing Web Bot Authentication methodologies that create a secure handshake between AI agents and web services. This authentication layer verifies identity before granting access. It establishes trust through certificates and validated credentials.
But authentication is just the first layer. In an agentic AI world, security teams must also track intent. If an AI agent claims it wants to purchase airline tickets, is it following that path? Or is it suddenly requesting airplane specifications, scraping unrelated data, or attempting remote code execution? Intent monitoring creates a behavioral baseline. Deviations signal potential threats.
Prompt injection attacks represent another emerging risk. Malicious actors can manipulate AI agents by embedding harmful instructions within what appears to be legitimate input. The agent executes these commands without realizing it has been compromised. OWASP has documented these threat vectors extensively, providing security teams with frameworks to understand and defend against AI-specific attacks.
The commerce landscape is changing. As AI agents take on more autonomous tasks, from purchasing to research to customer service, the web itself will transform. Digital handshakes will become the currency of trust. Organizations that establish robust authentication protocols now will be positioned to enable agentic AI safely. Those that do not will face escalating risks from tools like FraudGPT that exploit gaps in traditional security models.
Chokshi emphasizes that this is not a one-time fix. The threat environment evolves. Malicious AI tools become more sophisticated. Defense strategies must evolve in parallel. Security teams need layered approaches that combine authentication, intent verification, behavior monitoring, and threat intelligence.





