Security

GenAI Security: The Three Attack Surfaces Every CISO Must Protect Now

0

Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Agentic AI, Security

Every security leader is facing the same urgent question: What’s our GenAI plan? While artificial intelligence unlocks transformative possibilities for enterprises, it simultaneously introduces attack surfaces that most organizations aren’t prepared to defend. The challenge isn’t whether to adopt AI—teams are already using it—but how to secure it before it becomes your biggest vulnerability.

The Three Dimensions of AI Security

Steve Winterfeld, Advisory CISO at Akamai, sees AI security through three distinct lenses that require fundamentally different protection strategies. First is employee usage—sales teams using AI to generate content, developers leveraging it to review code, and healthcare providers using it to automate patient notes. The risk here is data leakage. When sales teams input information about upcoming product launches into public AI models, that proprietary intelligence can become part of the training data accessible to competitors.

Second is internal toolset integration. Security operations centers can use natural language queries to identify vulnerable servers or analyze threat patterns without requiring advanced coding skills. A junior analyst can now ask where a specific protocol with a zero-day vulnerability exists across the infrastructure and immediately see what data is at risk. This democratizes security capabilities but requires careful implementation to avoid creating new vulnerabilities in the tools meant to protect you.

Third, and where Winterfeld spends most of his time, is AI offered as a service to customers. Whether it’s a chatbot, an ordering system interface, or a recommendation engine, these customer-facing AI systems are directly exposed to attackers worldwide. This is where the security stakes are highest because hackers are actively probing these systems for weaknesses.

Choosing Your AI Model: Public, Private, or Hybrid

The model selection decision fundamentally shapes your security posture. Public AI models like ChatGPT offer broad general capabilities but lack organizational context. If Akamai wants new employees to access internal knowledge about content delivery, cybersecurity services, and cloud operations, a private AI model trained on proprietary data makes sense. This fenced-off approach keeps sensitive information contained while delivering the same natural language interface employees expect.

Hybrid models attempt to balance both worlds by tuning open models with organizational data. Winterfeld emphasizes that the choice depends on function: Are you building something foundational, domain-specific, or task-focused? A fiction author would choose Pseudo, an AI specifically designed for creative writing, over ChatGPT—not because ChatGPT can’t write stories, but because the specialized tool provides a better tailored experience.

Then there’s the agentic AI question. Should your security system automatically segment compromised servers when a zero-day emerges, or do you want a human approving each action to prevent operational disruption? These architectural decisions have massive security and business implications.

Navigating the Compliance Maze

AI regulations vary dramatically by region, and sisos must navigate this complexity while maintaining operational efficiency. Colorado currently has the most aggressive AI law in the United States, focusing heavily on impact assessment. If AI is making financial decisions, determining school admissions, or influencing other life-changing outcomes, there’s a higher compliance bar.

The key requirement? Auditability. When an AI denies someone a loan, you must be able to explain why based on specific factors like credit score and payment history. If the system responds with “magic algorithm,” that’s a compliance failure. Winterfeld points to a real example from early machine learning systems that were less likely to hire candidates from all-girls schools—not because of explicit gender bias, but because the historical training data reflected fewer women in certain roles. These hidden biases only emerge through rigorous auditing.

Europe’s EU AI Act is already in force, and Asian countries are implementing their own frameworks. The common thread is scalability: the more AI impacts people’s lives, the higher the compliance expectations and audit requirements.

AI-Powered Threats Are Accelerating

Both attackers and defenders are weaponizing AI, and the threat landscape is evolving faster than most security teams can adapt. Winterfeld sees AI making traditional attacks more sophisticated in several ways. Social engineering attacks now nail cultural nuances that previously exposed foreign threat actors. Low-skilled attackers can use vibe coding to build malicious bots without deep programming knowledge. Scraper bots are harvesting information at scale, devastating industries like publishing.

The most concerning trend? Agentic AI running autonomous attacks. These aren’t bots following rigid scripts—they’re adaptive systems that adjust tactics in real-time based on defensive responses. Akamai’s visibility across the web and edge reveals a massive surge in AI bot activity, with criminal AI services like FraudGPT and WormGPT operating openly.

Building Your AI Security Foundation

Security leaders can’t approach AI with a “just say no” mentality. The technology is too powerful and adoption too widespread. Instead, Winterfeld advocates for education, visibility, and smart risk mitigation. Security teams need new skills: SOC analysts must understand prompt engineering, compliance teams need to audit AI decision-making processes, and forensic investigators must develop capabilities for examining AI systems (even if some platforms don’t currently support traditional forensic methods).

The DevOps evolution required security teams to rethink file integrity monitoring for containerized environments and adapt to API-based architectures with business logic attacks. Now AI brings another transformation. Security professionals must learn how AI can be integrated safely rather than resisting adoption.

Critical tactical steps include implementing AI firewalls for customer-facing services, establishing clear policies for employee AI usage, and ensuring vendor agreements include audit rights and logging capabilities for investigation purposes. Most importantly, organizations need cultural transformation—teams that lean into AI, get educated, leverage it effectively, and continuously optimize rather than fear it.

The question isn’t whether your organization will use AI. Your employees are already using it. The question is whether your security strategy has caught up to that reality.

How Espresso AI Is Helping Enterprises Cut Databricks Bills by Half

Previous article

CNCF Turns 10: How Kubernetes Built the Cloud Native Ecosystem | Alex Chircop, Akamai

Next article