Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity
AI bot traffic is hammering healthcare and financial services, but the answer isn’t shutting down innovation. Steve Winterfeld, Advisory CISO at Akamai, cuts through the noise: regulated industries face a unique challenge as they race to adopt generative AI while managing compliance frameworks that weren’t built for this technology. The stakes are high—over 90% of AI bot activity in healthcare comes from scraping, and health insurance data commands premium prices in criminal markets. So how do you move fast without breaking regulations?
The Bot Traffic Reality Across Industries
Commerce led the charge into GenAI adoption and paid the price. Akamai observed more than 25 million bot requests targeting commerce sites during a two-month period. Why? Because these companies moved revenue-generating capabilities into large language models early, and criminals followed immediately. Every customer interaction with a GenAI commerce assistant creates an opportunity for automated fraud.
Healthcare faces a different but equally serious threat. More than 90% of AI bot activity in the healthcare sector stems from scraping and training bots conducting reconnaissance. Winterfeld emphasizes the breadth of this challenge: “When I say healthcare, most of us think hospitals, but it’s pharmacy, medical devices, healthcare insurance.” On the insurance side, stealing healthcare information ranks among the most profitable activities for cybercriminals, making this sector a prime target despite lower overall traffic compared to commerce.
Financial services sees similar patterns, with over 80% of AI bots classified as training and search bots. But beneath that reconnaissance lies competitive scraping and fraud opportunity mapping. These aren’t random attacks—they’re calculated efforts to understand systems before monetizing vulnerabilities.
The Compliance Framework Challenge
Trying to avoid AI entirely is what Winterfeld calls “a fool’s errand.” Heavily regulated industries may move slower, but they can’t opt out. The question becomes: what does responsible AI implementation look like when compliance is non-negotiable?
The principle is straightforward: build safe, transparent, fair, and accountable systems. But implementation depends on impact assessment. Buying a toothbrush online carries minimal risk. Applying for a mortgage or credit card? That’s high impact, demanding greater accountability and stronger controls.
Regulatory frameworks are emerging to codify these principles. The European Union’s AI Act led the way, followed by documentation from the National Institute of Standards and Technology (NIST) in the United States. At the state level, Colorado became one of the first to implement AI-specific legislation, mirroring the patchwork approach seen with privacy laws before federal standards emerged.
Building a Program Approach That Works
Winterfeld recommends treating AI governance as a coordinated program, not an isolated project. Start with cross-functional coordination: legal and compliance teams, risk management councils, cybersecurity teams, vendor management, and IT must work together. This is a team sport.
Implement tiered risk assessment based on impact. Focus resources on high-impact scenarios where mistakes could trigger class action lawsuits or regulatory penalties. Model transparency becomes critical here—data lineage maps, validation reports, and testing documentation must be built from the start, not retrofitted during an audit or investigation.
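The tiered approach above can be sketched in code. This is a minimal illustration, not anything Winterfeld or Akamai prescribes: the tier names, the `AIUseCase` fields, and the required-artifact lists are all hypothetical placeholders an organization would replace with its own risk taxonomy.

```python
from dataclasses import dataclass
from enum import Enum


class ImpactTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    name: str
    affects_credit_or_lending: bool  # e.g. mortgage or credit-card decisions
    handles_health_data: bool        # e.g. pharmacy, insurance, medical devices
    customer_facing: bool


def assess_tier(use_case: AIUseCase) -> ImpactTier:
    # High impact: scenarios where a mistake could trigger class action
    # lawsuits or regulatory penalties.
    if use_case.affects_credit_or_lending or use_case.handles_health_data:
        return ImpactTier.HIGH
    # Medium impact: customer-facing but low-stakes (the "toothbrush" case).
    if use_case.customer_facing:
        return ImpactTier.MEDIUM
    return ImpactTier.LOW


# Documentation built from the start, not retrofitted during an audit.
REQUIRED_ARTIFACTS = {
    ImpactTier.HIGH: ["data lineage map", "validation report", "testing documentation"],
    ImpactTier.MEDIUM: ["validation report"],
    ImpactTier.LOW: [],
}
```

In this sketch, a mortgage pre-approval assistant lands in the high tier and must ship with its lineage and validation artifacts, while an internal, non-customer-facing tool gets the lightest treatment—mirroring the impact-based resourcing the section describes.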
Privacy-preserving capabilities matter just as much in AI as in traditional systems. Hashing, tokenization, encryption, and access controls remain foundational. Continuous monitoring creates feedback loops to validate performance and catch drift before it becomes a compliance violation.
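Two of the foundational controls named above—keyed hashing (pseudonymization) and tokenization—can be sketched briefly. This is an illustrative toy, not a production design: the in-memory token vault and locally generated key are stand-ins for a real token service and a KMS-managed secret.

```python
import hashlib
import hmac
import secrets

# Assumption: in production this key lives in a KMS or vault, never in code.
SECRET_KEY = secrets.token_bytes(32)


def pseudonymize(identifier: str) -> str:
    """Keyed hash so raw identifiers never reach a model, log, or prompt.

    Deterministic per key, so the same patient maps to the same value,
    but irreversible without the secret.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


# Toy token vault: reversible tokenization for values that must be
# recoverable by authorized systems (access-controlled in practice).
_vault: dict[str, str] = {}


def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token


def detokenize(token: str) -> str:
    return _vault[token]
```

The design point: pseudonymized values are safe to feed into training or monitoring pipelines because they cannot be reversed, while tokens preserve a controlled path back to the original data for the systems that legitimately need it.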
Finally, formalize cross-functional oversight, especially in larger organizations. You need to demonstrate what you did, not just claim you did it. Documentation proves accountability when regulators come asking questions.
What This Means for Decision-Makers
The AI adoption race won’t wait for perfect compliance frameworks, but regulated industries can’t afford reckless innovation. The organizations that succeed will be those that build governance into their AI initiatives from day one—treating compliance not as a brake on progress, but as a framework for sustainable acceleration. The bot traffic is already here. The question is whether your defenses and processes are ready to match it.