At RSAC 2026, AI Redefines the Future of Penetration Testing
Penetration testing is undergoing a substantial shift as AI reshapes both attack and defense strategies. At RSA Conference 2026, multiple vendors pointed to the same underlying pressure: attack surfaces are expanding more quickly, while the time required to detect and address weaknesses is shrinking. That shift is being driven in part by the rise of AI-enabled adversaries, which can probe systems at scale and speed.
Penetration testing (pen testing) involves simulating cyberattacks on a system to identify vulnerabilities before attackers can exploit them. It goes beyond automated scans by attempting to exploit weaknesses the way a human adversary would. The rise of AI is changing both how pen testing exposes weaknesses and what can be done to address them.
One company at the forefront of this evolving cybersecurity ecosystem is Synack, which is combining AI-driven agents with a distributed network of human security researchers to move beyond traditional, point-in-time testing. Mark Kuhr, CTO of Synack, said the company is increasingly relying on autonomous agents to perform reconnaissance across environments, identifying potential attack paths before human testers step in to validate and exploit them.
“Doing good recon is extremely important,” Kuhr said. “AI can perform broad attack surface reconnaissance well, and we can build very targeted attack chains from there. It’s efficient, scalable and fast.” However, as in other areas where AI has advanced security capabilities, human oversight remains critical. “Where AI often falls short is in contextual understanding and creativity. Humans infer business logic that AI will sometimes miss.”
AI has also enhanced Synack’s approach through its ability to operate continuously. “AI can run for days with a strong memory of its own work. Humans, of course, have multiple needs. The benefit is that agents can handle the grunt work,” Kuhr said. There is some hope that AI will help alleviate the burnout and alert fatigue endemic to the cybersecurity community, but it also introduces the risk of generating additional noise. The question is no longer whether organizations are using AI for security, but how.
The environments that penetration testing aims to secure are also evolving rapidly with AI adoption, Kuhr noted. This shift will require humans and AI to work together to enable more continuous testing models. One takeaway from RSAC 2026 is that there is no shortage of innovation in how organizations are applying AI to cybersecurity. To what extent today’s AI systems can be trusted to reliably identify critical vulnerabilities remains an open question. An AI system that cannot be trusted to catch meaningful risks is not useful, regardless of its ability to scale.
“There are a lot of benchmarks out there for testing AI right now,” Kuhr said. “But the real world is more random and difficult than a lab. The only way to really evaluate these agents is against the breadth of knowledge of human experts.”
Synack, along with others in the space, is effectively betting that the future of penetration testing lies in a hybrid model, where AI provides scale and speed while humans deliver judgment and context. That balance may ultimately determine how well organizations can defend against increasingly automated threats.
