
AI Bots Are Draining Publisher Revenue: What CISOs Need to Know | Steve Winterfeld, Akamai


Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity, Cloud Security

Digital fraud and abuse aren’t the same thing. One involves deception and credential theft. The other involves unauthorized scale that drains revenue without breaking a single law. Understanding this distinction is now critical for security teams defending APIs, GenAI deployments, and public-facing services.

Steve Winterfeld, Advisory CISO at Akamai, recently broke down this challenge in stark terms: AI bots are querying publishing sites at unprecedented scale, delivering zero-click searches that bypass advertising and membership models entirely. The result? Publishers lose revenue. Marketing metrics collapse. And 63% of AI bot triggers now target this single industry.

The Fraud vs Abuse Distinction

Winterfeld defines fraud through the lens of deception. Social engineering. Account takeover. Credential stuffing attacks where stolen credentials from one site unlock banking access on another. These attacks exploit human behavior—password reuse, weak credentials, trust in familiar interfaces.

Abuse operates differently. An AI bot scrapes a news site hundreds of thousands of times. It extracts data. Delivers answers to users. Never clicks an ad. Never converts to a membership. The site gets used exactly as designed—just at a scale that destroys the business model.

“Those are zero-click searches,” Winterfeld explains. “All those websites who sell advertising or sell membership or have revenue generation through people visiting them—they’re getting none of that.”

This isn’t hypothetical. Publishing faces 63% of all AI bot triggers. Every query represents lost revenue. Every scraped article feeds a large language model that removes the need to visit the original source.
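
Blocking declared AI crawlers is the straightforward first step. The sketch below is illustrative, not something from the conversation: the user-agent substrings reflect commonly published crawler names, and any real deny list needs constant curation.

```python
# Hedged sketch: screen requests for declared AI crawlers by user agent.
# The substrings are illustrative; real deployments maintain curated,
# frequently updated signature lists (and expect evasion).
AI_CRAWLER_SIGNATURES = (
    "GPTBot",          # OpenAI
    "ClaudeBot",       # Anthropic
    "CCBot",           # Common Crawl
    "PerplexityBot",   # Perplexity
    "Bytespider",      # ByteDance
)

def classify_request(user_agent: str) -> str:
    """Return 'ai-bot' for declared AI crawlers, else 'unknown'."""
    ua = (user_agent or "").lower()
    if any(sig.lower() in ua for sig in AI_CRAWLER_SIGNATURES):
        return "ai-bot"
    return "unknown"

if __name__ == "__main__":
    print(classify_request("Mozilla/5.0 (compatible; GPTBot/1.2)"))      # ai-bot
    print(classify_request("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")) # unknown
```

Declared crawlers are the cooperative case. Scrapers that spoof browser user agents require behavioral bot detection, which is where dedicated bot management earns its keep.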

The Three-Layer GenAI Security Challenge

For CISOs, GenAI creates a cascading security problem. Winterfeld outlines three distinct layers:

Employee protection comes first. Marketing teams leak unreleased material into ChatGPT. Developers paste sensitive code into public models. Internal data escapes through prompts designed for productivity. (A minimal outbound-prompt filter is sketched below.)

Internal capabilities require different controls. Organizations buy GenAI tools for specific workflows. These need governance, access controls, audit trails. The risk surface expands with every new deployment.

External services create the most complex challenge. Public-facing GenAI integrates with APIs. Machine-to-machine interfaces operate at a scale attackers can exploit instantly. One misconfigured endpoint. One unprotected API. Either can be probed millions of times in minutes.
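
That first layer, employee protection, lends itself to a simple illustration. The following sketch is not from the conversation: the secret patterns and the safe_to_send gate are assumptions for illustration, and real data-loss-prevention tooling is far more thorough.

```python
import re

# Hedged sketch: screen prompts for obvious secrets before they reach
# a public GenAI service. Patterns are illustrative assumptions only;
# production DLP uses broader detectors (entropy checks, ML classifiers).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(password|api[_-]?key)\s*[:=]\s*\S+"),
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain a secret."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

assert safe_to_send("Summarize our Q3 blog draft")
assert not safe_to_send("debug this: api_key = sk-live-123456")
```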

Winterfeld emphasizes the API connection: “Those external services tie very closely to APIs, those machine interfaces. Now I’ve got GenAI facing publicly, and those APIs or bots are just causing huge scale.”
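
Rate limiting is the baseline control for that machine-scale exposure. Here is a hedged token-bucket sketch; the class name, rates, and burst sizes are illustrative assumptions, not a production design or Akamai's implementation.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: `rate` requests/second, bursts up to `burst`."""

    def __init__(self, rate: float = 5.0, burst: float = 20.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)   # client -> available tokens
        self.last = defaultdict(time.monotonic)    # client -> last refill time

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens[client_id] = min(
            self.burst,
            self.tokens[client_id] + (now - self.last[client_id]) * self.rate,
        )
        self.last[client_id] = now
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False  # throttle: return HTTP 429 upstream

limiter = TokenBucket(rate=5, burst=20)
print(limiter.allow("203.0.113.7"))  # True until the burst is spent
```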

Credential Stuffing: The Persistent Threat

Credential stuffing remains surprisingly effective. Attackers steal credentials from one breach. Test them across banking sites, retail accounts, email providers. Simple password reuse—what Winterfeld calls “the grocery store password gets you into the bank”—enables account takeover at scale.

This attack methodology relies on volume. Millions of stolen credentials. Automated testing against thousands of sites. Low success rates still yield profitable results when attackers operate at bot-driven scale.

Defenders face asymmetric economics. Every credential must be protected. Every login attempt analyzed. Every user educated on password hygiene. Attackers just need one match.
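
Because stuffing succeeds through volume, volume is also the tell. The sketch below flags one source IP failing logins across many distinct usernames; the sliding window and threshold are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

# Hedged sketch: flag credential-stuffing behavior by counting distinct
# usernames with failed logins per source IP in a sliding window.
WINDOW_SECONDS = 300          # illustrative assumption
DISTINCT_USER_THRESHOLD = 10  # illustrative assumption

failed = defaultdict(deque)   # ip -> deque of (timestamp, username)

def record_failed_login(ip: str, username: str, now: float | None = None) -> bool:
    """Record a failed login; return True if the IP looks like a stuffing source."""
    now = time.time() if now is None else now
    q = failed[ip]
    q.append((now, username))
    while q and now - q[0][0] > WINDOW_SECONDS:
        q.popleft()  # drop events outside the sliding window
    distinct_users = {u for _, u in q}
    return len(distinct_users) >= DISTINCT_USER_THRESHOLD

# A single IP failing against many accounts trips the detector:
for i in range(12):
    flagged = record_failed_login("198.51.100.9", f"user{i}", now=1000.0 + i)
print(flagged)  # True
```

Production systems correlate far more signals, such as device fingerprints and known-breached password sets, but the shape of the detection is the same.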

What This Means for Security Teams

The fraud-abuse distinction reshapes defensive priorities. Fraud demands identity verification, authentication controls, behavioral analysis. Abuse requires rate limiting, bot detection, API security.

Publishing’s revenue crisis signals broader implications. Any business model built on user engagement faces similar threats. AI-powered scraping. Zero-click value extraction. Automated queries that bypass monetization entirely.

Security teams must protect three surfaces simultaneously: human users making mistakes, internal tools processing sensitive data, and public APIs facing bot-driven scale. Traditional perimeter security doesn’t address this. Neither do standard authentication controls.

The answer requires layered defense. Bot management. API security. GenAI governance. User education. Each layer addresses different attack vectors. Each responds to different threat models.

Winterfeld’s framing makes the challenge clear: “As a CSO, I have three problems with GenAI. I need to protect my employees using them. I have internal use where I’m buying capabilities. And I have external services.”

Security leaders who treat fraud and abuse as identical threats will miss half the battle. The distinction matters. The response must match the methodology. And the stakes—from publishing revenue to API security—keep rising.
