Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity
Your developers are pasting proprietary code into GitHub Copilot. Your sales team is feeding customer data into ChatGPT to polish presentations. Your IT department just bought three different AI subscriptions you don’t know about. Welcome to the GenAI security challenge every CISO faces in 2025.
Steve Winterfeld, Advisory CISO at Akamai, doesn’t sugarcoat it: most organizations are flying blind when it comes to AI security. In a recent conversation, he outlined the three critical battlefronts security leaders must defend—and why the model selection decision matters more than most teams realize.
The Three GenAI Security Fronts
Winterfeld breaks down AI security into three distinct areas, each requiring different controls and governance approaches.
First is employee usage—the wild west of AI adoption. “Sales or marketing is probably out there using some capability to produce content,” Winterfeld explains. “My developers may be using it to review code or write code.” The problem? Each use case creates different exposure risks. When sales inputs next quarter’s product roadmap into a public LLM, that proprietary information could end up in the model’s training data, potentially accessible to competitors.
Healthcare organizations face even stricter requirements. Winterfeld recounts a recent doctor’s visit: “My doctor said, ‘Do you mind if I use AI to take notes today?’ They’re recording our conversation and using that to auto-generate notes.” In regulated industries governed by HIPAA or financial compliance frameworks, these seemingly innocent productivity gains can create massive compliance gaps.
The second battlefront is internal toolset integration. This is where Winterfeld sees the most immediate security value. In a Security Operations Center (SOC), junior analysts can now use natural language queries to perform complex threat hunting that previously required senior-level expertise. “Where do I have this certain protocol that now has a zero day? What data is associated with the servers that have that protocol?” A level-one analyst can ask these questions without writing queries or configuring anything—democratizing threat intelligence across the security team.
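To make that workflow concrete, here is a minimal sketch of how a plain-English hunting question might be turned into a structured query and run against log data. Everything here is illustrative: `call_llm` and `search_logs` are placeholder functions standing in for whatever approved LLM gateway and SIEM an organization actually uses, not any specific product’s API.

```python
# Hypothetical sketch: natural-language threat hunting for a SOC analyst.
# `call_llm` and `search_logs` are placeholders for the organization's own
# LLM gateway and SIEM/asset inventory; wire them to real services to use this.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the organization's approved LLM service."""
    raise NotImplementedError("connect to your internal LLM gateway")

def search_logs(query: str) -> list[dict]:
    """Placeholder for a query against the SIEM or asset data store."""
    raise NotImplementedError("connect to your log/asset data store")

def hunt(question: str) -> list[dict]:
    # Ask the model to translate the analyst's question into a search query,
    # e.g. "Where do I have this protocol that now has a zero day?"
    prompt = (
        "Translate this threat-hunting question into a log search query. "
        "Return only the query string.\n\n"
        f"Question: {question}"
    )
    query = call_llm(prompt)
    # The analyst never writes the query themselves; they review the results.
    return search_logs(query)

# Example: a level-one analyst asks in plain English.
# results = hunt("Which servers still expose SMBv1, and what data lives on them?")
```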
The third area—and where Winterfeld spends most of his time—is customer-facing AI services. “That’s probably where I spend most of my time trying to protect it, because now it’s open to the world, and I have hackers directly attacking it.” Chatbots, AI-powered ordering systems, and customer service interfaces create direct attack surfaces that adversaries will actively probe for vulnerabilities.
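As a rough illustration of what “directly attacking it” looks like in practice, here is a naive pre-filter a customer-facing chatbot endpoint might apply before forwarding text to the model. The patterns and the `handle_chat` flow are assumptions for illustration only; real deployments layer this with model-side guardrails, output filtering, and rate limiting.

```python
import re

# Illustrative sketch only: a naive input screen for a customer-facing chatbot.
# The patterns below are examples, not a complete or sufficient defense.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"act as (an?|the) (admin|developer)",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

def forward_to_model(user_input: str) -> str:
    """Placeholder for the real call to the customer-facing model."""
    raise NotImplementedError("connect to your chatbot backend")

def handle_chat(user_input: str) -> str:
    if looks_like_injection(user_input):
        # Log and refuse rather than passing the text to the model.
        return "Sorry, I can't help with that request."
    return forward_to_model(user_input)
```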
The Model Selection Decision Matrix
Once you’ve mapped your AI security landscape, the next question is which model to trust. Winterfeld identifies three deployment patterns, each with distinct security implications.
Public or open AI models offer broad capabilities and ease of deployment. But they’re generic. “I can write a better email with it. I can do a lot of things with it,” Winterfeld notes. The tradeoff is zero control over data handling and model training.
Private or proprietary models give organizations complete control. At Akamai, which provides content delivery, cybersecurity, and cloud services, Winterfeld wants new employees to leverage internal knowledge bases through AI interfaces. “I’d love them to be able to do that with a GenAI kind of interface,” he explains. By running a proprietary service, all company data stays within controlled boundaries, and the model trains exclusively on Akamai’s information architecture.
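The internal knowledge-base pattern Winterfeld describes is typically retrieval-augmented: company documents stay inside the boundary, and the privately hosted model only sees retrieved snippets. The sketch below shows that shape under stated assumptions; the embedding, search, and generation calls are placeholders, not a specific vendor’s API.

```python
# Rough sketch of a retrieval-augmented internal knowledge service.
# Every function marked "placeholder" stands in for a privately hosted model
# and an internal document index; nothing leaves the company boundary.

def embed(text: str) -> list[float]:
    """Placeholder: embedding from the privately hosted model."""
    raise NotImplementedError

def search_internal_docs(query_vector: list[float], k: int = 5) -> list[str]:
    """Placeholder: nearest-neighbor lookup over the internal document index."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: completion from the privately hosted model."""
    raise NotImplementedError

def answer_employee_question(question: str) -> str:
    # Retrieve relevant internal documents rather than sending raw company
    # data to a public model.
    snippets = search_internal_docs(embed(question))
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using only the internal context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```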
Hybrid approaches attempt to balance both worlds—tuning open models with proprietary data while maintaining some connection to broader training sets. “You’re trying to get the best of both worlds,” Winterfeld says. The decision depends on your specific function: foundational tasks versus domain-specific needs versus highly specialized applications.
He offers a practical example: if you’re writing a science fiction novel, you’d use Sudowrite, an AI specifically designed for fiction authors. “Could you do the same thing in ChatGPT? Sure, but you’re not going to get the same experience. It’s not going to be as easy. It’s not going to be as tailored.”
The Agentic AI Question
The conversation reaches its most critical inflection point when Winterfeld addresses agentic AI—systems that can act autonomously without human oversight. This is where security strategy meets business continuity.
Consider a zero-day vulnerability. An agentic system could automatically identify affected servers, assess data criticality, implement network segmentation, and deploy additional controls, all without human intervention. Fast, efficient, bulletproof response.
Or is it?
“Do I want a human in the loop to make sure we have no disruption in operation?” Winterfeld asks. It’s the fundamental tension between security speed and operational stability. Automated response can contain threats faster than any human analyst. But it can also segment critical production systems, block legitimate traffic, or create cascading failures that human judgment would prevent.
For Winterfeld, the answer isn’t binary. It’s contextual. The decision framework should weigh threat severity, business impact, recovery complexity, and organizational risk tolerance. Some scenarios demand immediate automated response. Others require human judgment to balance security with operational continuity.
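One way to encode that contextual judgment is a simple gate that decides whether an automated action proceeds or waits for approval. The fields and thresholds below are assumptions for illustration, not a standard; each organization would tune them to its own risk tolerance.

```python
from dataclasses import dataclass

# Illustrative only: field names and thresholds are assumptions, not a
# standard framework; tune them to the organization's risk tolerance.
@dataclass
class ProposedAction:
    threat_severity: int   # 1 (low) .. 5 (critical)
    business_impact: int   # 1 (negligible) .. 5 (production outage)
    reversible: bool       # can the action be rolled back quickly?

def requires_human_approval(action: ProposedAction) -> bool:
    # Contain immediately when the threat is severe, the change is easy to
    # undo, and the blast radius is small.
    if action.threat_severity >= 4 and action.reversible and action.business_impact <= 2:
        return False
    # Anything that could disrupt production waits for a human in the loop.
    return True

# Example: segmenting a non-critical server after a confirmed zero day.
auto_ok = not requires_human_approval(
    ProposedAction(threat_severity=5, business_impact=1, reversible=True)
)
print(auto_ok)  # True: proceed without waiting for approval
```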
What This Means for Security Leaders
The GenAI security challenge isn’t going away. If anything, it’s accelerating as AI capabilities expand and adoption deepens across every business function. Winterfeld’s framework offers a practical starting point: map your three battlefronts, choose models that match your security requirements, and make deliberate decisions about automation boundaries.
The organizations that get this right won’t just avoid AI-related breaches. They’ll unlock genuine productivity gains while maintaining the security posture their business demands. Those that don’t? They’re one prompt injection or training data leak away from a very bad quarter.