From attack surfaces to decision-making systems, artificial intelligence has radically altered the risk calculus for enterprise cybersecurity teams. At the forefront of this conversation is Steve Winterfeld, Advisory CISO at Akamai, who outlined the multi-dimensional AI security landscape in his interview with TFiR at RSA Conference 2025.
AI Security: A Three-Front Battle
Winterfeld broke down AI security into three distinct categories: “You’ve got to really protect your employees from using AI, protecting your use of AI. And if you have AI involved in your product, that’s a different protective schematic.”
These aren’t theoretical concerns. Employees experimenting with generative AI can unknowingly leak sensitive data. Internal tools using AI models must be validated for security, privacy, and explainability. And when AI becomes a core component of a product—such as in content filtering or decision automation—the stakes and risk models change entirely.
Agentic AI: The Game Changer
A key theme was the rise of agentic AI—systems capable of making autonomous decisions based on data inputs and learned behavior. As AI evolves from reactive to proactive, the potential for misuse (or misfiring) becomes more pronounced.
Winterfeld shared a striking example of AI’s current capabilities in the wild: “I watched a talk on someone who used AI to capture the flag, and it was brilliant… they literally would just take questions, put them into AI, and with the right prompts, were getting the answers.”
This dual-use potential makes agentic AI both a breakthrough and a security risk. Security teams must prepare for scenarios where attackers, too, harness these tools.
Defending Against AI-Driven Threats
Threat actors have already started leveraging AI for crafting more convincing phishing emails and executing sophisticated social engineering campaigns. The “speed and scale” AI brings to adversarial operations means traditional defenses must evolve fast.
Winterfeld emphasized proactive risk modeling and constant adaptation: “Security leaders need to constantly reevaluate where AI is embedded and how it could be exploited.”
Implications for Developers and Tech Leaders
For developers and SREs, integrating AI securely requires:
- Guardrails for data inputs and model outputs
- Transparency in AI decision logic
- Isolation layers when embedding AI in products
- Strong internal education for employees experimenting with tools like ChatGPT or Copilot
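The first two guardrails above can be sketched in a few lines. This is a minimal, hypothetical illustration (the pattern names, markers, and function names are assumptions, not from any specific product): sensitive values are redacted from prompts before they leave the enterprise boundary, and model outputs are screened before being passed downstream. A production deployment would rely on a dedicated DLP or AI-gateway tool rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for sensitive data an employee might paste into a
# generative-AI tool; a real deployment would use a DLP engine instead.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_input(prompt: str) -> str:
    """Mask sensitive values before the prompt is sent to an external model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt


# Strings that should never appear in output forwarded to users or systems.
BLOCKED_OUTPUT_MARKERS = ("begin private key", "password:")


def validate_output(response: str) -> str:
    """Refuse to pass model output downstream if it contains blocked content."""
    lowered = response.lower()
    for marker in BLOCKED_OUTPUT_MARKERS:
        if marker in lowered:
            raise ValueError(f"model output blocked: contains '{marker}'")
    return response
```

The same choke-point pattern generalizes: every prompt and every response flows through one auditable layer, which is also where logging, rate limiting, and the isolation boundary for product-embedded AI would live.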
Tech leaders must invest not only in tooling but in governance—because bad AI practices won’t just lead to bugs; they could lead to breaches.
Final Take
AI isn’t just a new technology—it’s a new attack surface, new threat actor capability, and new internal liability. For CISOs and DevSecOps teams, that means developing entirely new playbooks.