Security leaders say the next two years are going to be ‘insane’

Kevin Mandia, Morgan Adamski, and Alex Stamos tell CyberScoop that AI is finding bugs faster than anyone can fix them, exploit development is accelerating, and most organizations aren't prepared for what's coming.

SAN FRANCISCO — Every RSA Conference has its buzzwords. Cloud. Ransomware. Zero trust. Plastered across the 87-acre Moscone Center complex on every booth, banner and bar. This year it was AI, with vendors pitching AI-powered solutions to every security problem imaginable. But 2026 stood out for a different reason: industry leaders spent the conference warning about disruption from the very technology everyone was selling.

In an exclusive discussion with CyberScoop at this year’s conference, Kevin Mandia, founder of AI security company Armadin, Morgan Adamski, former executive director of U.S. Cyber Command, and Alex Stamos, a researcher and former chief security officer at several major technology companies, said the industry is entering what they described as an unprecedented two- to three-year period of upheaval, driven by AI systems that are discovering vulnerabilities exponentially faster than defenders can respond and threatening to render decades of security practices obsolete.

“We are just at the inflection point that is going to be pretty insane, at least two to three years,” Stamos said, describing a near-term future in which AI systems flood the threat landscape with working exploits while organizations struggle to patch vulnerabilities faster than attackers can weaponize them.

Mandia put the timeline more bluntly. “It’s a perfect storm for offense over the next year or two,” he said.

The core problem, according to the executives, is speed. AI has made vulnerability discovery almost trivial, while remediation takes time and effort, creating a widening gap that favors attackers across every stage of the kill chain.

“Because of the asymmetry in the cyber domain, where one person on offense can create work for millions of defenders, speed leverages that asymmetry,” Mandia said. “In the near term, there’s an advantage to the attackers as they start to use models and agents to do a lot of the offense.”

Bug discovery goes exponential

The shift is already underway. Stamos, who is currently chief security officer at Corridor, said foundation model companies are sitting on thousands of bugs discovered through AI-assisted analysis that they lack the capacity to verify or patch. 

“The exploit discovery has gone exponential,” Stamos said. “What we haven’t seen go exponential yet is plugging that into working shellcode that bypasses protections on modern processors. But maybe six months or a year from now” AI will be generating sophisticated exploits on demand.

He pointed to examples of AI systems discovering vulnerabilities in decades-old code that had been reviewed by thousands of developers and professional security researchers. In one case, he said, an AI system identified a flaw in foundational Linux kernel code that humans had overlooked for years.

“This superintelligent system was able to figure out a way to manipulate the machine into a place that, when you look at the bug, I’m not sure how a human could have found that,” Stamos said.

The pace of discovery is creating what Stamos called “a massive collective action problem.” Each successive generation of AI models could surface hundreds of new vulnerabilities in the same foundational software. “It’s quite possible that all this development we’ve done in memory-unsafe languages, without formal methods, that none of that is actually secure in the presence of superintelligent bug-finding machines,” he said. “In which case we need to be massively rebuilding the base infrastructure we all work on. And nobody is doing that.”

The timeline for when those capabilities become widely accessible is measured in months. When Chinese open-source models, like DeepSeek or Alibaba’s Qwen, reach current American foundation model capability levels, Stamos said, “you’re going to have every 19-year-old in St. Petersburg with the same capability” as elite vulnerability researchers.

Models trained on existing shellcode are already “reasonably good” at generating exploit code, he said, and may be capable of producing EternalBlue-level exploits within a year. That NSA-developed exploit, leaked in 2017, was used in the WannaCry and NotPetya attacks and remained effective for years because of how difficult such capabilities were to develop. 

“Imagine when that becomes available on demand,” Stamos said.

Agents already operating beyond human scale

Mandia’s company Armadin has built AI agents capable of autonomous network penetration that he said would be devastating if deployed maliciously. Unlike human attackers who must manually type commands and wait for results, AI agents operate across hundreds of threads simultaneously, interpolating command outputs before they arrive and launching follow-on actions in microseconds.

“The scale and scope and total recall of an AI agent compromising you and swarming you is not humanly comprehensible,” said Mandia, who founded Mandiant and served as CEO from 2016 to 2024. “If the old way was a red team that would get in, there’s a human on a keyboard typing commands. That’s a joke compared to” what AI agents can do.

Those agents can evade endpoint detection and response systems in under an hour, he said, and operate at human speed to avoid rate-limiting detection mechanisms. Once inside a network, an AI agent can analyze documentation, packet captures and technical manuals faster than humans can read them, designing attacks tailored to specific control systems on the fly.

“When you build the offense, it scares the heck out of you,” Mandia said. “If we let the animal out of the cage today, nobody’s ready for it.”

He said Armadin recently tested a Fortune 150 company with a strong security team and found either remote code execution vulnerabilities or data leakage paths in every application tested. “Both of us were shocked,” he said.

The shift changes the fundamental question boards ask after penetration tests. Historically, directors wanted to know the probability a demonstrated attack would occur in the real world. “In the age of humans, you could never really answer,” Mandia said. “But with AI, it’s 100 percent. It’s coming and it’s going to get cheaper and more effective at the same time.”

Defenders face impossible timelines

The compression of attack timelines is colliding with organizational realities that are moving in the opposite direction. Adamski, who is now the U.S. lead for PwC’s Cyber, Data & Technology Risk business, said chief information security officers face pressure from boards to adopt AI rapidly, often with explicit goals of reducing headcount, even as compliance requirements remain unchanged and the threat landscape accelerates.

“CISOs are getting squeezed in that they cannot stop adoption because of demand from the board, from the CEO,” Adamski said. “None of the SOC 2 requirements have changed. ISO 27000, anything that helps people get through from a compliance perspective, all those rules are exactly the same.”

Stamos said patch cycles illustrate the mismatch. Where previously only sophisticated adversaries could reverse-engineer Microsoft’s Patch Tuesday updates to develop exploits, AI will democratize that capability. “You’re going to be able to drop the patch into Ghidra, driven by an agent, and come up with [an exploit],” he said. “Patch Tuesday, exploit Wednesday.”

Many CISOs are trying to bolt AI capabilities onto existing security operations, an approach the executives said is insufficient. “They’re not stepping back and looking at the bigger picture, that we have a fundamental, much more holistic problem in terms of how to reimagine and redo an entire cyber defense ecosystem that is solely driven by AI machine to machine,” Adamski said.

Avoiding Pandora’s box

The national security implications compound the problem. While other former government leaders at the conference talked about what they saw as the United States slipping behind in offensive cyber capabilities, the three industry leaders spoke about what they believe nation-states have already developed using AI.

“I think we’re seeing less than 50 percent of the AI capability from modern nation-states right now,” Mandia said. “They’re not pressing. Nobody wants to be the first one to open that door.”

Stamos said the operational tempo favors U.S. adversaries. Russian intelligence services can observe and record data from the hundreds of businesses hit by ransomware daily, using that operational experience to train offensive AI models. “We don’t have that kind of operational pace in the U.S.,” he said.

Adamski said any AI capability the United States develops for offensive cyber operations carries inherent risks. “Anything you introduce, you’re introducing it to an ecosystem that they can use back at us,” she said.

Stamos said AI’s impact on cybersecurity will likely produce harmful consequences before other domains because the threshold for cyber operations is already low. “We allow on a Tuesday to happen in the cyber world what we would consider an act of war if it was in any other context,” he said. “I think this is where AI will be used first to hurt people, will be in cyber.”

Two years, maybe

The executives offered limited optimism that AI could also accelerate defensive capabilities, primarily by making security testing affordable at scale and enabling autonomous response systems. But the timeline for when defensive capabilities might catch up depends on immediate action. 

“Two years if we’re good,” Stamos said. “Two years is the minimum if we actually start really fixing code and refactoring stuff into type-safe languages using formal methods.”

Mandia offered optimism “a few years out” if offensive AI built by defenders successfully trains autonomous defensive systems. But he acknowledged the current state is dire. Organizations will need autonomous systems capable of immediately quarantining anomalous behavior, he said, because traditional detection and response timelines will collapse.

“You’re not going to have time to call Mandiant on a Thursday afternoon, get people in, sign a contract,” Mandia said. “You’re going to have to be able to respond at machine speed.”

Stamos said defenders must assume they cannot patch their way out of the problem and focus instead on defense in depth, particularly around lateral movement and persistence, which remain more difficult for AI to automate than initial exploitation.

But even that assumes organizations have time to prepare. The executives suggested that window is closing rapidly, if it hasn’t already shut for good.

Adamski summed up the reckoning facing the industry: “AI is going to potentially make us pay for the sins of yesterday.”

Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.
