
Why CISOs Fail at Fraud Prevention (And the OWASP Framework That Fixes It) | Steve Winterfeld, Akamai


Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Security

Security teams are drowning in alerts, but most can’t answer a basic question: can you see account takeover attempts in real time? If the answer is no, you have a visibility problem masquerading as a fraud problem. Steve Winterfeld, Advisory CISO at Akamai, doesn’t mince words: “Our industry is very poor at processes, which is why we have a lot of heroes—that one person who can do everything. But that’s a single point of failure.”



In this clip from TFiR’s conversation with Winterfeld, he maps out a concrete framework for reducing fraud without adding friction or depending on tribal knowledge. The approach starts with accepting a hard truth: blocking doesn’t work. What works is visibility, situational awareness, and disciplined management.

The OWASP Roadmap for Fraud-Specific Security

Winterfeld’s team analyzed OWASP’s Top 10 lists for web applications, APIs, and large language models to identify which controls specifically prevent fraud, not just general cybercrime. The result is a prioritized hit list: access control, authentication, and preventing injection attacks.

“We looked across all of these and said, which ones apply to fraud specifically,” Winterfeld explains. “That maps down to prompt injection, SQL injection, server-side request forgery, business logic abuse, configuration issues, and protected data handling for healthcare, credit cards, and finance.”
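Of the threats on that list, injection is the most directly actionable for developers. As a minimal illustration (not from the clip), parameterized queries close the SQL injection path that string-built queries leave open, by binding user input as data rather than SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

def get_balance(user_input: str):
    # Parameterized query: user_input is bound as a value, never
    # interpolated into the SQL string, so "' OR '1'='1" is inert.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE user = ?", (user_input,)
    ).fetchone()
    return row[0] if row else None

print(get_balance("alice"))             # 100.0
print(get_balance("alice' OR '1'='1"))  # None: treated as a literal name
```

The same principle (never concatenate untrusted input into an interpreter’s command string) generalizes to the other injection classes on the list.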

This isn’t academic. Every statistic Winterfeld references comes from Akamai’s deployments protecting customer APIs, large language models through their AI firewall, and web applications at scale. When he talks about API abuse patterns or LLM prompt injection rates, he’s describing real attack telemetry, not vendor hypotheticals.

The framework creates focus. Instead of tuning every control simultaneously, security teams can tell developers: “For the next quarter, we’re preventing injection attacks. That’s it.” Product teams get clear acceptance criteria. SOC analysts know what alerts matter. Leadership understands the ROI.

Why AI-Specific Controls Are Non-Negotiable

Generic web application firewalls won’t stop LLM-targeted attacks. Winterfeld is adamant that AI workloads need purpose-built security controls designed for large language model attack vectors. “Deploy AI-specific security controls—the controls designed for large language models,” he advises. “Right behind that, APIs are what’s interacting with them.”

The attack surface is fundamentally different. Traditional web apps deal with known input patterns. Large language models accept natural language, making prompt injection a category of threat that didn’t exist three years ago. Business logic abuse—where attackers don’t break anything but exploit intended functionality—is far more common in API and AI contexts than in legacy web stacks.
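A purpose-built AI firewall like the one Winterfeld describes uses far richer signals than any snippet can show, but a hedged sketch of the idea is a pre-filter that screens user prompts for instruction-override phrasing before they reach the model. The patterns below are hypothetical examples, not Akamai’s rules:

```python
import re

# Illustrative heuristics only: example phrasings commonly associated
# with prompt injection attempts. A production control would combine
# many signals, not a short regex list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?(system|hidden) prompt",
]

def flag_prompt(user_text: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(flag_prompt("What's my account balance?"))            # False
print(flag_prompt("Ignore previous instructions and ..."))  # True
```

Note that this is exactly the kind of control a generic WAF lacks: it inspects natural-language intent, not structured input patterns.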

DDoS protection, interestingly, didn’t make the OWASP top 10 but remains essential. Winterfeld treats it as table stakes, separate from the fraud-specific control stack but equally critical for operational resilience.

From Tactical Controls to Strategic Alignment

Technology is the easy part. The harder conversation happens with the board: what’s our risk appetite for bot traffic? Are we okay with “frenemies”—competitive scrapers who consume resources but don’t directly harm users? How much customer friction is acceptable to reduce fraud?

“How can I work with my board to have a risk-based bot management approach?” Winterfeld asks. “Am I okay with having some friction that my customers would experience to reduce fraud, reduce scraping, reduce my cyber crime impact costs?”

This isn’t a technical decision. It’s a business tradeoff that requires executive buy-in. A financial services CISO might accept zero fraud tolerance and higher friction. An e-commerce leader might optimize for conversion and tolerate more risk. Neither is wrong—but the choice must be explicit, documented, and aligned with business objectives.

Winterfeld’s final recommendation ties it together: use external frameworks like OWASP to build your program quickly and defend your decisions. “If you’re in some kind of class action, lawsuit, or audit finding, you can go back and say why you made your decision and that you did use best practices.”

The Visibility Litmus Test

The simplest way to audit your fraud prevention posture: ask your team if they can see scraping activity and account takeover attempts. If the answer is no, your controls aren’t configured correctly—or you don’t have the right controls at all.

“If I can’t see those examples, that’s a prime use case, then I’m not sure I trust those controls,” Winterfeld notes. Visibility isn’t optional. It’s the foundation for every downstream decision about tuning, blocking, and resource allocation.
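What does “seeing” account takeover attempts mean in practice? One minimal sketch (my illustration, with assumed thresholds) is a sliding-window counter over failed logins: a burst of failures against one account is a classic credential-stuffing signal worth surfacing as an alert.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window (assumed threshold)
MAX_FAILURES = 5       # failed logins per account before alerting

failures = defaultdict(deque)  # account -> timestamps of failed logins

def record_failed_login(account: str, ts: float) -> bool:
    """Record a failed login; return True when the account crosses
    the threshold, i.e. a possible account-takeover attempt."""
    q = failures[account]
    q.append(ts)
    # Drop failures that have aged out of the sliding window.
    while q and ts - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# Six rapid failures against one account trip the alert on the sixth.
alerts = [record_failed_login("bob@example.com", t) for t in range(6)]
print(alerts)  # [False, False, False, False, False, True]
```

Even a signal this crude answers the litmus test: if nothing in your stack can produce it, you don’t have visibility.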

For teams just starting, Winterfeld’s advice is liberating: you don’t need to build everything from scratch. Leverage OWASP’s research, deploy AI-specific and API-focused controls, establish visibility, and align with leadership on acceptable risk. The framework is already built. The data is already collected. What’s missing is the process to operationalize it—and that’s exactly what this clip provides.
