Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity
Your AI system is under attack, and your traditional web application firewall isn’t built to stop it. As organizations rush to deploy AI capabilities, a new attack surface has emerged—one that demands specialized defenses. Steve Winterfeld, Advisory CISO at Akamai, outlines exactly what’s at stake and why AI-specific security controls are no longer optional.
The OWASP Blueprint for AI Defense
When OWASP released its Top 10 vulnerabilities for Large Language Models, it crystallized what security teams already suspected: AI systems face fundamentally different threats than traditional applications. The framework organizes vulnerabilities into four critical categories that every CISO must address.
First, organizations must protect inputs and prompts. Prompt injection attacks and system prompt leakage represent the front door for attackers looking to manipulate AI behavior. These aren’t theoretical risks—they’re active attack vectors being exploited in production systems today.
“You want to protect your inputs and your prompt, so there’s prompt injection and system prompt leakage,” Winterfeld explains. “You want to make sure that your system isn’t allowing those to happen.”
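As a concrete illustration, the simplest form of input-side screening checks incoming prompts against known injection phrasings before they ever reach the model. The patterns and function below are purely illustrative, not a description of any vendor's product; real AI firewalls rely on far richer detection (classifiers, context analysis, canary tokens) than a handful of regexes:

```python
import re

# Illustrative patterns only -- production systems use ML-based detection,
# not static regexes, since attackers trivially rephrase these.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input resembles a known prompt-injection attempt."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A flagged input would then be blocked or routed for review rather than forwarded to the LLM.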
Data Integrity and Behavioral Controls
The second layer focuses on protecting the data model itself. Data poisoning and supply chain vulnerabilities can corrupt the very foundation of AI systems, undermining trust and accuracy. Organizations investing millions in training models can’t afford to overlook this attack surface.
Behavioral controls form the third defense layer. Improper output handling, sensitive information disclosure, and misinformation—commonly called hallucinations—can damage reputation and expose confidential data. These aren’t bugs to fix later; they’re security vulnerabilities requiring immediate attention.
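On the output-handling side, one basic behavioral control is redacting sensitive data before a model response leaves the system. This sketch is a hypothetical minimal example; real deployments would use dedicated DLP tooling rather than two regexes:

```python
import re

# Illustrative redaction rules -- a real data-loss-prevention layer covers
# far more categories (credentials, account numbers, internal hostnames).
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b"), "[REDACTED-EMAIL]"),
]

def sanitize_output(text: str) -> str:
    """Scrub sensitive-looking values from model output before returning it."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```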
System Resources as Attack Targets
The fourth category addresses system and automation resource management. Running large language models is expensive, making resource abuse a serious threat. Excessive agency, unbounded consumption, and vector embedding weaknesses can either drain budgets or result in intellectual property theft.
“If somebody attaches an API to yours and downloads your whole model, they’ve basically stolen your IP,” Winterfeld notes. The financial and competitive implications are staggering.
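One common mitigation for both unbounded consumption and API-based model extraction is per-client rate limiting, since scraping an entire model requires enormous query volumes. Here is a minimal sliding-window sketch; the thresholds and class name are illustrative assumptions, not a reference implementation:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter keyed by client ID (illustrative thresholds)."""

    def __init__(self, window: float = 60.0, limit: int = 100):
        self.window = window        # seconds the window spans
        self.limit = limit          # max requests allowed per window
        self.history = defaultdict(deque)

    def allow(self, client_id: str, now: float = None) -> bool:
        """Return True and record the request if the client is under its quota."""
        now = time.monotonic() if now is None else now
        q = self.history[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False            # over quota: reject, don't record
        q.append(now)
        return True
```

An extraction attempt hammering the API would hit the quota quickly, while normal interactive use stays well under it.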
The AI Firewall Imperative
Traditional web application firewalls and API gateways weren’t designed for AI-specific attack patterns. Organizations need dedicated AI firewalls that understand these unique threats. Akamai’s position at the edge gives the company unique visibility into emerging attack patterns across global traffic.
“Most people are finding you need something specific,” Winterfeld emphasizes. “A typical WAF is not going to just be able to get API or AI specific attacks as a general rule.”
DDoS protection remains critical, but it’s just table stakes. The combination of DDoS mitigation, API protection, and AI-specific firewalls creates the defense-in-depth approach modern AI systems require.
Bot Armies and Automated Threats
From Akamai’s vantage point, the threat landscape is intensifying. AI bots are proliferating at an alarming rate. Criminal AI services like FraudGPT and WormGPT have democratized sophisticated attacks. Social engineering campaigns that once required cultural fluency and language skills are now trivial to execute with generative AI assistance.
Vibe coding, prompting AI to generate working code, has lowered the skill barrier dramatically. Attackers who couldn’t code now produce functional exploits through simple prompts. The result is a surge in bot armies conducting continuous, automated attacks.
“The biggest thing we’re seeing is a huge increase of AI bots, and these are attacking continuously,” Winterfeld reveals. “A lot of this is built around scrapers, harvesting information, and having huge impacts on industries like publishing.”
What This Means for Decision-Makers
Organizations deploying AI capabilities face a stark choice: adopt AI-specific security controls or operate with unmanaged risk. The OWASP Top 10 for LLMs provides a clear roadmap. AI firewalls, API protection, and DDoS mitigation must work together.
Security teams can’t treat AI systems as just another application. The attack patterns are different, the stakes are higher, and the threat actors are increasingly automated. As Winterfeld’s insights reveal, defenders need visibility, specialized tools, and a clear understanding of AI-specific vulnerabilities to stay ahead.