Why Your SOC Analysts Need AI Prompting Skills Now | Steve Winterfeld, Akamai

Guest: Steve Winterfeld (LinkedIn)
Company: Akamai
Show Name: CISO Insights
Topic: Cybersecurity

Security teams face a critical skills gap they might not even realize exists yet. As organizations rush to deploy generative AI, the traditional security playbook falls short. SOC analysts, compliance officers, and forensics teams all need entirely new capabilities—and the clock is ticking.

The New Security Skillset for AI

Steve Winterfeld, Advisory CISO at Akamai, draws on his experience guiding security through multiple technological shifts—from DevOps to API-first architectures—to outline what’s different about AI. The evolution isn’t just technical; it’s cultural and operational.

“We moved into DevOps and had to reteach cybersecurity teams,” Winterfeld explains. “I had to teach SOC analysts that you can’t put a file integrity manager on a code snippet. You need to work with developers to integrate security differently.” Each wave of innovation demanded new thinking, new tools, and new collaboration models.

Now, with generative AI, the requirements go deeper. SOC analysts need prompting skills to understand how AI systems can be manipulated or misused. Compliance teams need to grasp the complexities of AI governance, not just apply checkbox frameworks. Forensics investigators need to understand how to audit AI systems—even when traditional logging and investigation methods don’t apply.

From Gatekeepers to Enablers

The mindset shift matters as much as the technical skills. Winterfeld emphasizes that security teams can’t position themselves as roadblocks to AI adoption. “I don’t want people to be afraid of AI. I don’t want Luddites on my team,” he says. “I want people leaning in, getting educated, leveraging it and optimizing it moving forward.”

This represents a fundamental change in how security operates. Rather than saying no to AI initiatives, security teams must partner with business units to understand risks and mitigate them to acceptable levels. It’s the same transformation that occurred with DevSecOps, but the stakes are higher and the pace faster.

People, Processes, and Technology

Winterfeld frames AI security through three pillars. People need the right skills and mindset. Processes must include clear policies, governance frameworks, and expectations for AI use. Technology requires AI-specific controls like AI firewalls and vendor management agreements that include audit rights and investigation capabilities.

On the technology front, security leaders must ensure vendor contracts allow them to “pull up the equivalent of logs” from external AI providers. Without this visibility, forensic investigations become impossible, leaving organizations blind to potential incidents or compromises.

Beyond DevSecOps

When asked whether AI security will evolve into something like “AI SecOps” or remain integrated into existing security operations, Winterfeld offers a nuanced view. While some AI-specific tools will emerge, most security work will focus on securing the AI capabilities organizations provide as services.

The pattern mirrors earlier transitions. Just as API security became a specialization within broader application security, AI security will develop its own expertise while remaining part of the security team’s core responsibilities. The key is ensuring teams develop the specialized knowledge while maintaining the broader security perspective.

What This Means for Security Leaders

Organizations can’t afford to wait for perfect solutions or complete frameworks. Security leaders must start now: identify skills gaps, launch training programs, establish governance policies, and implement AI-specific controls. The teams that lean in early will shape how their organizations leverage AI safely. Those that hesitate will find themselves bypassed—or worse, responsible for securing AI deployments they don’t understand.

Winterfeld’s message is clear: AI security isn’t about fear or prohibition. It’s about enabling innovation through informed risk management, skilled teams, and appropriate controls. The technology is moving too fast for any other approach.
