Guest: Taylor Smith
Company: Exaforce
Show Name: 2026 Predictions
Topic: Security
Security operations are facing a fundamental shift in 2026. Multi-factor authentication, long considered a cornerstone defense, is showing its limitations against sophisticated attacks. At the same time, AI-generated code vulnerabilities and deepfake-enabled social engineering are emerging as the next wave of threats that security teams must prepare for now.
The Breaking Point of MFA
Taylor Smith, Director of Product Management at Exaforce, believes that MFA alone will no longer provide adequate protection in 2026. “MFA is a great line of defense and can help reduce the attack surface and the number of successful attacks. However, we’ve seen ways sessions can be stolen and other ways attacks can still succeed, even with MFA,” Smith explains.
The prediction reflects a growing reality in security operations: attackers are evolving faster than traditional defenses can adapt. Session hijacking and token theft techniques are bypassing authentication layers, forcing organizations to implement additional controls beyond basic MFA implementations.
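One control beyond basic MFA is binding a session token to the client attributes observed at login and revoking the session when those attributes change, so a stolen token replayed from an attacker's machine is rejected. A minimal sketch of the idea, with illustrative names and an in-memory store not taken from the episode:

```python
import hashlib
import secrets

# In-memory session store: token -> fingerprint hash (illustrative only;
# a real deployment would use a shared, expiring store)
_sessions = {}

def _fingerprint(ip: str, user_agent: str) -> str:
    """Hash the client attributes the session is bound to."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def create_session(ip: str, user_agent: str) -> str:
    """Issue a token after MFA succeeds, bound to the client fingerprint."""
    token = secrets.token_urlsafe(32)
    _sessions[token] = _fingerprint(ip, user_agent)
    return token

def validate_session(token: str, ip: str, user_agent: str) -> bool:
    """Reject a token replayed from a different client, and revoke it."""
    expected = _sessions.get(token)
    if expected is None or expected != _fingerprint(ip, user_agent):
        _sessions.pop(token, None)  # revoke on mismatch
        return False
    return True
```

A fingerprint this coarse produces false revocations for mobile users whose IP changes; production systems typically combine it with short token lifetimes and re-authentication rather than relying on it alone.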
AI-Coded Vulnerabilities on the Horizon
Perhaps Smith’s most striking prediction involves the intersection of AI coding assistants and security vulnerabilities. He forecasts that 2026 will see “the next 10/10 CVSS CVE, like Log4j, traced back to a single AI-coded pull request.”
The scenario Smith describes is already taking shape in development environments worldwide. Overwhelmed developers, pressed for time and managing increasing workloads, are incorporating AI-generated code into production systems. “The overwhelmed developer who doesn’t have enough time—who’s monitoring this or reviewing the pull request—will just merge it,” Smith notes.
This prediction underscores a critical gap in current development practices: while AI coding tools accelerate development velocity, the review processes and security checks haven’t kept pace. Organizations need to implement stronger code review protocols specifically designed to catch AI-generated vulnerabilities.
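One concrete form such a protocol can take is an automated gate in CI that scans the added lines of every pull request for patterns that should force a human security review before merge. The sketch below is illustrative, not a product feature discussed in the episode, and the pattern list is a small assumed subset:

```python
import re

# Patterns that commonly warrant extra review before merging
# (an illustrative subset, not a complete ruleset)
RISKY_PATTERNS = {
    r"\beval\(": "dynamic code execution",
    r"shell\s*=\s*True": "shell injection risk",
    r"verify\s*=\s*False": "TLS verification disabled",
    r"pickle\.loads\(": "unsafe deserialization",
}

def flag_risky_lines(diff_text: str) -> list[tuple[int, str]]:
    """Return (line_number, reason) for added diff lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):  # only scan lines the PR adds
            continue
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings
```

A check like this does not catch novel logic flaws, but it turns "the overwhelmed developer will just merge it" into a merge that at least requires acknowledging a flagged finding.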
Deepfakes Enter the Phishing Arsenal
Social engineering attacks are becoming increasingly sophisticated, and Smith predicts that 2026 will mark a turning point. “We’ll see the first deepfake used in a major phishing breach,” he states. These attacks will go beyond convincing email copy to include fabricated video evidence of executives making requests.
“You’ll see a deepfake used to make it look like the CEO is asking about it, complete with video evidence—but it’s fake,” Smith explains. The implication for security awareness training and verification protocols is significant. Organizations will need to establish out-of-band verification processes for sensitive requests, even when they appear to come with video confirmation.
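The core of out-of-band verification is that approval never travels over the channel the request arrived on, video included: a one-time code is sent to a contact registered in advance and must be relayed back. A minimal sketch under those assumptions, with all names and the contact directory invented for illustration:

```python
import secrets

# Pre-registered out-of-band contacts (illustrative; in practice this
# lives in an identity system, set up before any request arrives)
OOB_CONTACTS = {"ceo@example.com": "+1-555-0100"}

_pending = {}

def request_approval(requester: str, action: str):
    """Start verification by issuing a one-time code for delivery over the
    pre-registered channel, never the channel the request came in on."""
    if requester not in OOB_CONTACTS:
        return None  # no trusted channel on file: deny by default
    code = secrets.token_hex(3)
    _pending[(requester, action)] = code
    # In a real system: deliver `code` via SMS/voice to OOB_CONTACTS[requester]
    return code

def confirm(requester: str, action: str, code: str) -> bool:
    """Approve only if the code relayed back matches; codes are single-use."""
    return _pending.pop((requester, action), None) == code
```

Because the code only ever reaches the pre-registered phone number, a fabricated video of the CEO cannot complete the loop: the attacker never sees the code.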
Compliance Frameworks Demand Active Testing
Smith’s fourth prediction addresses a fundamental limitation in current compliance approaches. He believes compliance frameworks will begin requiring runtime exercises, including red team, blue team, and purple team activities. “It won’t be enough to make sure you have the proper posture,” Smith says. “You’re also going to need the right tooling in place, the right processes, and people who are well trained on the tools you use.”
This shift from passive posture checks to active defense validation represents a maturation of compliance thinking. Rather than simply verifying that controls exist, frameworks will demand proof that those controls actually work under realistic attack conditions.
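Proof that a control works under attack conditions can be expressed as a repeatable exercise: generate the telemetry a real technique would produce, run the detection logic against it, and pass only if the alert fires. A toy sketch of that purple-team pattern, with a deliberately simple brute-force detection invented for illustration:

```python
def detect_bruteforce(events: list[dict]) -> bool:
    """Toy detection rule: flag five or more failed logins for one user."""
    failures = {}
    for event in events:
        if event["type"] == "login_failed":
            failures[event["user"]] = failures.get(event["user"], 0) + 1
    return any(count >= 5 for count in failures.values())

def simulate_bruteforce(user: str, attempts: int) -> list[dict]:
    """Generate the telemetry a real brute-force attempt would produce."""
    return [{"type": "login_failed", "user": user} for _ in range(attempts)]

def run_exercise() -> bool:
    """Pass only if the control actually detects the simulated technique."""
    return detect_bruteforce(simulate_bruteforce("svc-account", 6))
```

A posture check would only confirm the rule exists; the exercise confirms it fires, which is the distinction Smith draws between having controls and proving they work.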
The Talent Crisis Continues
Despite AI’s potential to augment security operations, Smith sees the skills shortage persisting through 2026. “We’ll still see a continued lack of trust in AI to lean on it, so we’ll still have a significant skills shortage—because AI won’t be enough to offload the work,” he explains.
However, this challenge creates opportunities for early adopters. Organizations that embrace AI-augmented security operations now will gain competitive advantages. “Companies that are early adopters of AI in security will see real, tangible benefits, including productivity gains that help them meet organizational needs without scaling headcount,” Smith notes.
Actionable Steps for Security Leaders
Smith’s advice for security leaders is clear: start experimenting immediately. “2025 was the year of experimenting, and 2026 should be the year of implementation,” he says. “Start small. Whether it’s a small group within a larger organization or a single area of the organization, start there, then build that trust.”
For Exaforce, the focus is on expanding coverage for custom applications and leveraging AI for enhanced detection capabilities. “We’re focused on expanding our coverage, and a big part of that is handling custom applications out of the box much more effectively,” Smith explains. The company is also enhancing investigation and threat hunting capabilities by leaning on frontier AI models while maintaining a robust data platform.
The message is clear: 2026 will separate organizations that have begun implementing AI-augmented security operations from those still experimenting or waiting. As threats evolve with AI assistance, defenses must evolve in parallel.