Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity, Cloud Security
AI-powered bots aren’t just another wave in the long history of automation. According to Akamai’s latest State of the Internet (SOTI) Fraud and Abuse Report 2025: Charting a Course Through AI’s Murky Waters, they’re now actively reshaping fraud, scraping, compliance, and even the business models of digital companies. In this in-depth conversation, Steve Winterfeld, Advisory CISO at Akamai, walks through what’s really happening and what security leaders need to prepare for next.
A year ago, most discussions around bots focused on familiar automation patterns. That era is now over. Winterfeld explains that the surge in AI-powered bots—many driven by large language models (LLMs)—has pushed fraud and scraping into a level of speed and scale that defenders haven’t seen before. Traditional web and API protections are struggling to keep up, and enterprises are discovering that they must now treat AI interactions as a first-class security surface.
He begins by drawing a critical distinction between “fraud” and “abuse.” Fraud involves deception, such as social engineering, identity theft, or account takeover. Abuse is different: it means using a system in a way the owner never intended, such as scraping a site hundreds of thousands of times and siphoning off its commercial value. In industries like publishing, this distinction now defines survival. Winterfeld notes that more than 60 percent of the AI bot activity Akamai detects targets publishing sites, which rely on page views and memberships for revenue. Zero-click extraction by GenAI tools bypasses that entire model, leaving publishers exposed even when no “attack” technically occurs.
Other sectors aren’t far behind. Commerce is being hammered by automated shopping agents, competitive scraping, and criminals who now use LLMs to streamline fraud operations. Akamai observed more than 25 million bot requests against commerce sites in just two months, underscoring how deeply commerce has already been integrated into AI-driven workflows. Healthcare and financial services see slightly different patterns—most activity is reconnaissance and training bots scraping sensitive data sources. But the stakes are even higher, because healthcare and financial records are some of the most profitable targets for cybercriminals.
Winterfeld points out that the explosion in traffic isn’t just noise. “Traffic has risen by 300% in the last year,” he says. Some portion of that traffic is legitimate. Much of it is not. But all of it incurs real cost—compute, bandwidth, operational overhead, false positives, and trust decisions that must now be made at machine speed.
A large part of the challenge is that GenAI has dramatically lowered the barrier to entry for would-be cybercriminals. In the past, attackers needed tooling or engineering expertise. Now they can write functional scripts with natural language prompts, or they can pay for criminal LLMs such as WormGPT or FraudGPT, which provide turnkey fraud automation. Gangs that once needed technical specialists can now orchestrate campaigns end-to-end with AI-generated code, deployment steps, and adaptive behavior.
Against this backdrop, every industry faces a question: what is the right strategy for managing AI-driven fraud and abuse? Winterfeld argues that defenders must get better at visibility, not merely blocking. If security teams cannot reliably detect scraping, account takeover attempts, or LLM misuse, they cannot build effective controls. Bot management, API security, AI-specific protections, and DDoS defense must now work as a unified layer—not a collection of separate tools.
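In practice, that kind of visibility usually starts with data the organization already has. The sketch below is a minimal illustration (my own, not Akamai's tooling), assuming access logs parsed into records with a client IP, user agent, and timestamp: it flags clients whose self-identified AI-crawler user agent or request rate points to automated scraping, which is the signal a team needs before it can tune any allow, throttle, challenge, or block policy.

```python
# Minimal visibility sketch: surface likely scraping clients from web access logs.
# Assumptions (mine, not from the episode): records are dicts with "client_ip",
# "user_agent", and "timestamp" (epoch seconds); the user-agent substrings and
# the rate threshold are illustrative placeholders.
from collections import defaultdict

SUSPECTED_AI_AGENTS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")  # illustrative
RATE_THRESHOLD = 100  # requests per client per minute; tune to your own traffic

def flag_scraping_clients(log_records):
    """Return the set of client IPs that look like automated scrapers."""
    requests_per_minute = defaultdict(int)
    flagged = set()
    for rec in log_records:
        bucket = (rec["client_ip"], int(rec["timestamp"]) // 60)
        requests_per_minute[bucket] += 1
        # Signal 1: self-identified AI crawler user agents.
        if any(agent in rec.get("user_agent", "") for agent in SUSPECTED_AI_AGENTS):
            flagged.add(rec["client_ip"])
        # Signal 2: request rates far above human browsing patterns.
        if requests_per_minute[bucket] > RATE_THRESHOLD:
            flagged.add(rec["client_ip"])
    return flagged
```

The thresholds and user-agent strings are placeholders; the point is the workflow of measuring first and deciding per client second.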
Winterfeld highlights frameworks such as the OWASP Top 10 for Web, API, and LLM security as helpful ways to prioritize controls. For fraud-centric threats, he points to access control, strong authentication, and injection prevention as foundational areas. Many organizations still lack the ability to identify business logic abuse or unauthorized scraping, two vectors that now dominate AI-driven attack traffic.
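To make one of those foundational areas concrete, here is a small, hedged illustration of injection prevention (my example, not one from the report): the unsafe function splices user input directly into SQL text, while the safe version uses a parameterized query so attacker-controlled input is never parsed as SQL, which is the standard mitigation the OWASP material points to.

```python
# Injection prevention, illustrated with SQLite (example only, not from the report).
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value, so it cannot alter the query structure.
    return conn.execute("SELECT id, email FROM users WHERE name = ?", (username,)).fetchall()
```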
The regulatory side is evolving just as quickly. Europe’s AI Act, NIST’s AI documentation guidance, and emerging state-level AI regulations—such as those in Colorado—are shaping what “safe, transparent, fair, and accountable systems” must look like. Winterfeld emphasizes that organizations in highly regulated industries cannot simply opt out of AI. They must adopt it, govern it, and protect it simultaneously. That requires cross-functional coordination among legal, compliance, security, IT, and vendor management teams.
On the operational side, enterprises must assume that GenAI will accelerate both legal use cases and criminal misuse. That means adopting documentation practices such as data-lineage tracking, validation reports, audit paths, and privacy-preserving techniques like hashing and tokenization. Monitoring must become continuous, and oversight must be formally embedded into existing governance structures.
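As one illustration of what those practices can look like in code, the sketch below (my own assumptions about field names, salt handling, and audit format, not guidance from the report) replaces a direct identifier with a random token, keeps only a keyed hash for matching, and appends an audit record so the data's lineage can be reconstructed later.

```python
# Sketch of hashing/tokenization with an audit trail; the vault, salt handling,
# and record format are illustrative assumptions, not Akamai's or NIST's design.
import hashlib
import hmac
import json
import secrets
import time

SALT = secrets.token_bytes(32)  # in production, a managed secret, not generated per run
TOKEN_VAULT = {}                # token -> keyed hash (stand-in for a real token vault)
AUDIT_LOG = []                  # append-only audit path for lineage tracking

def tokenize_field(record: dict, field: str) -> dict:
    """Swap a sensitive field for a token, keeping a keyed hash for later matching."""
    value = record[field]
    hashed = hmac.new(SALT, value.encode(), hashlib.sha256).hexdigest()
    token = "tok_" + secrets.token_hex(8)
    TOKEN_VAULT[token] = hashed
    AUDIT_LOG.append({"ts": time.time(), "action": "tokenize", "field": field, "token": token})
    sanitized = dict(record)
    sanitized[field] = token
    return sanitized

# Usage: the raw email never reaches downstream systems in the clear.
print(json.dumps(tokenize_field({"email": "user@example.com", "plan": "gold"}, "email")))
```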
Looking forward, Winterfeld believes attackers will soon reach a point where AI coordinates entire campaigns—development, deployment, exploitation, and adaptation—without human intervention. That kind of end-to-end automation will change the tempo of cybercrime. Enterprises will need staff who understand GenAI behavior, prompt-driven attack surfaces, and how LLMs can be compromised or manipulated.
For defenders, the mission has two parts: deploy the right controls today and build the right skills for tomorrow. As Winterfeld puts it, blocking alone will not work. Situational awareness and programmatic management must guide how CISOs respond to the next wave of AI-driven threats.