Security

How CISOs Can Navigate AI Compliance Across Global Regulations | Steve Winterfeld, Akamai


Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: AI Governance

AI compliance has evolved from a simple security checkbox to a complex web of regional regulations, cultural considerations, and ethical imperatives. For CISOs serving global audiences, the question isn’t whether to comply—it’s how to make smart choices when Colorado has different rules than the EU, and Asia introduces its own AI acts. Steve Winterfeld, Advisory CISO at Akamai, cuts through the complexity with a practical framework: understand the function, audit the impact, and design for transparency from day one.

The Regional Patchwork of AI Regulation

AI governance isn’t uniform. In the United States, Colorado leads with the most aggressive AI law, focused on impact assessment. If AI systems make financial decisions or determine access to education, there’s a higher expectation of fairness and auditability. The EU AI Act is already in effect, while several Asian countries have rolled out their own AI regulations. The common thread? Scale and impact. The higher the stakes of an AI decision, the greater the scrutiny.

“If the AI is making financial decisions or who gets access to school, those have big impacts,” Winterfeld explains. “So there’s a higher expectation there. And we want to make sure they’re fair. Well, how do you make sure it’s fair? You have to be able to audit it.”

For security leaders, this means working closely with legal teams to understand not just cybersecurity requirements but broader compliance obligations that vary by jurisdiction and industry.

The Auditability Imperative

The ability to explain AI decisions isn’t just good practice—it’s becoming a legal requirement. If an AI system denies someone a personal loan, regulators expect organizations to articulate why. Was it credit score? Debt-to-income ratio? Five specific factors that can be reviewed and challenged? If the answer is “it’s a magic algorithm,” that’s not compliance—that’s liability.

Winterfeld emphasizes that auditability must be built into AI systems from the design phase. “If you don’t design the AI to provide you the why and how it did something, it can’t,” he notes. This means organizations in heavily regulated industries should work with AI vendors that have tailored their capabilities for compliance, including temporary data use without permanent storage when required.
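Designing for the "why" can be as simple as making every decision carry its contributing factors. A minimal sketch of the idea, in Python, with invented factor names and thresholds (the loan criteria here are hypothetical, not from any regulation):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)  # auditable "why"

def decide_loan(credit_score: int, debt_to_income: float) -> Decision:
    """Return the decision together with every factor that drove it."""
    reasons = []
    if credit_score < 620:  # hypothetical minimum score
        reasons.append(f"credit score {credit_score} below 620 minimum")
    if debt_to_income > 0.43:  # hypothetical DTI cap
        reasons.append(f"debt-to-income {debt_to_income:.2f} above 0.43 cap")
    return Decision(approved=not reasons, reasons=reasons)

denial = decide_loan(credit_score=580, debt_to_income=0.50)
print(denial.approved)  # False
print(denial.reasons)   # two concrete, reviewable reasons
```

The point isn’t the thresholds; it’s that a denial ships with specific, challengeable reasons instead of "the algorithm said so."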

The Hidden Bias Problem

Even well-intentioned AI systems can perpetuate bias in subtle ways. Winterfeld shares a striking example from early machine learning applications: A hiring model was less likely to recommend candidates from all-girls schools. The reason? Historical data showed fewer women were hired, and fewer women attended all-girls schools, creating a correlation the model learned to avoid. While the system wasn’t explicitly told to discriminate against women, the bias crept in through a sub-factor that went undetected—until an audit caught it.

“That’s what we need to be able to audit and find out—those kind of weird examples of something going wrong,” Winterfeld says. This underscores why transparency and regular audits are essential, especially for AI systems that impact people’s livelihoods, education, or financial opportunities.
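One way an audit surfaces this kind of proxy bias is to compare outcome rates across a sub-factor the model was never explicitly told about. A hedged sketch with made-up data and an arbitrary threshold, loosely modeled on the hiring example above:

```python
from collections import defaultdict

def recommendation_rates(records):
    """records: list of (school_type, recommended) pairs from historical runs."""
    counts = defaultdict(lambda: [0, 0])  # school_type -> [recommended, total]
    for school_type, recommended in records:
        counts[school_type][0] += int(recommended)
        counts[school_type][1] += 1
    return {k: rec / total for k, (rec, total) in counts.items()}

# Invented audit data for illustration only
history = [
    ("all-girls", True), ("all-girls", False), ("all-girls", False), ("all-girls", False),
    ("coed", True), ("coed", True), ("coed", True), ("coed", False),
]

rates = recommendation_rates(history)
disparity = max(rates.values()) - min(rates.values())
if disparity > 0.2:  # arbitrary audit threshold for this sketch
    print(f"audit flag: recommendation rates diverge by {disparity:.0%}")
```

A real audit would use statistical tests and far more data, but even this crude rate comparison would have flagged the school-type correlation long before it became a discrimination problem.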

What This Means for Security Leaders

For CISOs navigating this landscape, the path forward requires three key actions. First, understand the function and industry context—heavily regulated sectors demand AI vendors with compliance-specific capabilities. Second, ensure data handling aligns with regional laws, whether that means avoiding permanent data storage or implementing specific privacy safeguards. Third, collaborate with legal and compliance teams to address not just cybersecurity but fairness, transparency, and impact assessment.

As more countries introduce AI legislation, the organizations that succeed will be those that design compliance into their AI systems from the start—not those that try to retrofit it later. The choice isn’t between innovation and regulation. It’s between thoughtful AI governance and costly surprises down the road.
