Guest: Christopher “CRob” Robinson
Organization: Open Source Security Foundation (OpenSSF)
Show Name: 2026 Predictions
Topic: Cybersecurity
The security landscape in 2026 will be shaped by two forces: AI-enabled threats and regulatory mandates that require organizations to prove they have done their homework. Christopher “CRob” Robinson, Chief Security Architect at the Open Source Security Foundation (OpenSSF), sees this year as an inflection point, where compliance frameworks like the EU’s Cyber Resilience Act (CRA) trigger a global shift in how organizations manage software risk.
The First AI-Orchestrated Major Breach Is Coming
Robinson’s first prediction is stark: a major security incident involving AI technologies is coming. “You can’t log on to the internet or watch a news program without some mention of artificial intelligence technologies,” Robinson explains. “It’s a double-edged sword. There’s great benefit for organizations—it helps optimize things and upskill people—but those same benefits also apply to the bad guys.”
The concern isn’t just that AI will be used in attacks—it’s the speed and adaptability it enables.
“I fear that in 2026 we’re going to see some type of major Heartbleed- or Log4Shell-style incident that involves AI technologies. It’s the speed with which these tools work. I don’t know that defenders will be able to keep pace as it adjusts automatically,” Robinson says.
Human Error Remains the Top Risk
Despite growing attention on sophisticated attacks, Robinson’s second prediction is sobering: human error will continue to be the leading cause of data breaches. This assessment is grounded in years of data from Verizon’s annual Data Breach Investigations Report (DBIR).
“Every year, consistently, the single most cited root cause for all of these big industry breaches has been attributable to the human factor,” Robinson says. “It’s not some sexy or exotic zero-day problem. It’s a human—whether through ignorance, lack of training, missing tools, or simply making a mistake.”
SBOMs: From Creation to Actionable Intelligence
The industry has largely solved the challenge of creating Software Bills of Materials (SBOMs). Now the focus is shifting toward extracting value from them. “I think 2026 is going to be the year when we see the conversation shift from creation to assurance and aggregation,” Robinson explains.
Large enterprises may have thousands of SBOMs. “You need to sift through them to get wisdom out,” he adds. The missing piece is attestation. Unlike signed software packages, SBOMs lack standardized assurance mechanisms.
“We don’t have an official channel for that for SBOMs,” Robinson notes. “You need to understand where they came from and whether they’ve been changed since they were initially issued.”
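The shift Robinson describes, from producing SBOMs to aggregating them and checking they haven’t changed since issuance, can be sketched in a few lines. This is an illustration only: the field names (`components`, `name`, `version`) follow the CycloneDX JSON shape, and the functions (`sbom_digest`, `verify_unchanged`, `aggregate_components`) are hypothetical stand-ins for the standardized attestation channel the industry still lacks.

```python
import hashlib
import json

def sbom_digest(raw_bytes: bytes) -> str:
    """SHA-256 of the SBOM file exactly as distributed."""
    return hashlib.sha256(raw_bytes).hexdigest()

def verify_unchanged(raw_bytes: bytes, issued_digest: str) -> bool:
    """True only if the SBOM matches the digest recorded at issuance."""
    return sbom_digest(raw_bytes) == issued_digest

def aggregate_components(sboms: list[dict]) -> dict[str, set[str]]:
    """Map component name -> set of versions seen across all SBOMs."""
    seen: dict[str, set[str]] = {}
    for bom in sboms:
        for comp in bom.get("components", []):
            seen.setdefault(comp["name"], set()).add(comp.get("version", "?"))
    return seen

# Toy usage: two SBOMs that both ship log4j-core, at different versions.
bom_a = {"bomFormat": "CycloneDX", "components": [
    {"name": "log4j-core", "version": "2.14.1"}]}
bom_b = {"bomFormat": "CycloneDX", "components": [
    {"name": "log4j-core", "version": "2.17.2"},
    {"name": "openssl", "version": "3.0.13"}]}

raw = json.dumps(bom_a).encode()
issued = sbom_digest(raw)             # digest recorded when the SBOM was issued
assert verify_unchanged(raw, issued)  # unmodified since issuance

inventory = aggregate_components([bom_a, bom_b])
print(inventory["log4j-core"])  # both shipped versions surface in one query
```

Aggregating thousands of SBOMs this way is what turns them from compliance paperwork into the "wisdom" Robinson mentions: one query answers which versions of a vulnerable component are in the estate.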
The CRA Triggers a Global Compliance Wave
The EU’s CRA reporting obligations take effect in September 2026, requiring manufacturers and open source stewards to report exploited vulnerabilities and breaches. More stringent requirements—such as secure-by-design practices—come into force in December 2027.
The CRA’s influence extends beyond Europe.
“The CRA has definitely started a trend where, internationally, many governments are considering implementing similar rules,” Robinson says, citing Japan, India, and South Korea.
What Enterprises Should Do Now
Robinson’s advice centers on responsible software consumption and due diligence. “You need to understand whether it’s an open source project or an open source model—the pedigree and provenance, how that software was made, who had access to make changes, and how you’re going to integrate it with your existing practices and security controls,” he says.
The payoff comes during audits.
“The more work you do up front before you ingest a model or a package, the more it pays dividends down the road,” Robinson adds.
The AI Opportunity
Despite the risks, Robinson also sees opportunity. He points to OpenSSF’s work with DARPA on the AI Cyber Challenge, where AI tools achieved high success rates in identifying vulnerabilities and generating fixes. “Finding a problem and then writing a fix is exponentially more helpful,” Robinson explains.
“We’re going to see a lot of tools, and people will start to realize that many traditional application security practices—like fuzzing, access control, and network segmentation—are absolutely applicable in the AI space.”
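Robinson’s point that traditional practices such as fuzzing carry over to AI systems can be shown with a minimal random-input harness. Everything here is a toy: `parse_prompt` stands in for the input-handling layer of an AI service, and `fuzz` is a bare-bones sketch of the technique, not a real fuzzing tool.

```python
import random
import string

def parse_prompt(text: str) -> dict:
    """Toy parser: 'role: message' lines into a dict; rejects empty input."""
    if not text.strip():
        raise ValueError("empty prompt")
    out = {}
    for line in text.splitlines():
        role, _, msg = line.partition(":")
        out[role.strip()] = msg.strip()
    return out

def fuzz(target, runs: int = 1000, seed: int = 0) -> list[tuple[str, Exception]]:
    """Feed `runs` random printable strings to `target`; collect any crash
    other than the ValueError the parser is documented to raise."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        sample = "".join(rng.choice(string.printable)
                         for _ in range(rng.randint(0, 40)))
        try:
            target(sample)
        except ValueError:
            pass  # documented rejection, not a bug
        except Exception as exc:  # anything else is a finding
            failures.append((sample, exc))
    return failures

crashes = fuzz(parse_prompt)
print(f"{len(crashes)} unexpected crashes")
```

The same loop works whether the target is a log parser or the prompt-ingestion path of a model-backed service, which is exactly the continuity of practice Robinson predicts people will rediscover.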
OpenSSF is accelerating its education efforts, developing new courses focused on CRA compliance and risk management using the European Union Agency for Cybersecurity (ENISA) framework. The foundation is also expanding beyond traditional conferences to reach practitioners more directly.
Robinson’s message is clear: 2026 will test whether organizations can balance AI’s opportunities with the discipline required to manage new attack surfaces and meet escalating compliance obligations.