The Vuln Surge Is Coming. CSA Is Telling Us How to Survive It
There is a lot of noise right now around AI and cybersecurity. Some of it is breathless. Some of it borders on panic. That is why the recent MythosReady draft report from the Cloud Security Alliance deserves recognition.

First and foremost, this was a serious effort by serious people.

The CSA brought together a remarkable group of contributors, including Gadi Evron, Rich Mogull, and a virtual who's who of respected voices from across the cybersecurity community. Anyone who has ever tried to coordinate a collaborative industry paper knows how difficult that is to pull off. So, before getting into analysis or critique, it is important to simply say thank you. This kind of work helps move the conversation forward when the industry needs clarity.

And clarity is exactly what the MythosReady report tries to provide.

Much of the recent discussion around AI systems like Anthropic's Mythos has been framed in terms of existential fear. If AI can analyze code at machine scale and discover vulnerabilities across massive software ecosystems, then what happens next? The easy reaction is panic. The MythosReady report does something far more useful. It steps back and approaches the issue in a cold, rational way.

You can read the draft report yourself here.

The report essentially lays out a preparation plan for what many are calling a coming vulnerability surge. AI systems capable of reviewing enormous code bases could dramatically accelerate the rate at which software flaws are discovered. That means security teams, vendors and developers may suddenly face a wave of disclosures far larger than what traditional vulnerability management processes were designed to handle.

The CSA effort reframes this moment from crisis to preparation.

Instead of asking whether this surge will happen, the report focuses on how organizations should respond if it does.
It talks about operational readiness, coordinated disclosure processes, automation in remediation pipelines and stronger collaboration across the ecosystem. In many ways the report functions as a playbook for navigating the turbulence that may accompany AI-driven vulnerability discovery.

That alone makes it one of the most valuable industry contributions we have seen on this topic. But there are two areas that deserve a little more attention as this conversation evolves.

The first has to do with exploitation.

Much of the discussion around Mythos focuses on the discovery of vulnerabilities. The assumption is that AI will simply find more bugs. But discovery has never been the whole story.

My friend Jeremiah Grossman lays this out very clearly in a recent blog post.

For years, the industry has understood that the vast majority of vulnerabilities are effectively harmless. They may exist in code, but they are not reachable or exploitable in ways that create real risk. Estimates vary, but I have seen numbers that suggest something north of 97% of vulnerabilities fall into that category. They are there, but they cannot realistically be used to cause damage.

That reality created a kind of equilibrium in the system. Security researchers could only find vulnerabilities at a human pace, and only a small fraction of those discoveries translated into real attacks. Exploit development required deep expertise. Skilled exploit writers were rare. In many cases, attackers simply purchased weaponized exploits from the small number of researchers capable of producing them.

In other words, the system had two natural brakes: the human capacity to discover vulnerabilities, and the human expertise required to weaponize them.

AI potentially changes both. If systems like Mythos can not only identify vulnerabilities but also generate exploit code around them, then the percentage of exploitable flaws could rise.
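The compounding effect is easy to sketch on the back of an envelope. In the toy model below, the ~3% exploitable figure comes from the estimate cited above; everything else, including the baseline discovery volume, the surge multiplier and the post-surge 10% exploitability rate, is an illustrative assumption of mine, not a figure from the report or from Grossman's post:

```python
# Toy model: how AI-driven discovery AND AI-driven exploit generation
# could compound. All specific numbers here are illustrative assumptions.

def attackable_surface(vulns_found: int, exploitable_pct: float) -> int:
    """Estimate how many discovered flaws are realistically weaponizable."""
    return round(vulns_found * exploitable_pct)

# Status quo: human-paced discovery, ~3% of flaws exploitable.
baseline = attackable_surface(vulns_found=30_000, exploitable_pct=0.03)

# Hypothetical surge: 10x the discovery volume, and exploit generation
# lifts the exploitable ratio from 3% to 10%.
surge = attackable_surface(vulns_found=300_000, exploitable_pct=0.10)

print(f"baseline attackable flaws: {baseline}")      # 900
print(f"post-surge attackable flaws: {surge}")       # 30000
print(f"multiplier: {surge / baseline:.0f}x")        # ~33x
```

The point of the sketch is that the two effects multiply rather than add: a 10x rise in discovery combined with a roughly 3x rise in exploitability yields an attackable surface on the order of 30x larger, not 13x.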
Even a modest shift in that ratio could have serious consequences. Imagine that the percentage of exploitable vulnerabilities moves from 3% to 10%. That may not sound dramatic at first glance, but in practice it would represent a massive increase in attackable surface area.

I have not yet seen this dimension explored in detail in the MythosReady report or elsewhere. That does not mean the CSA authors missed it. It may simply be too early to quantify. And I will be the first to admit that I am not smart enough to know what the correct answer is here. But it is a variable that deserves serious attention as we think about what AI-enabled vulnerability discovery really means.

The second reality that deserves discussion is the transition period we are likely to face before things get better.

Many people believe that AI will ultimately lead to more secure software. I tend to agree with that. If automated systems can find flaws earlier in development and help developers write safer code, the long-term outcome could be a stronger and more resilient digital ecosystem.

But first we have to get through the crucible.

In an earlier piece, I described this dynamic with a phrase that still feels relevant today: the operation may be successful, but the patient might die.

Some applications, and even some companies, simply will not survive the remediation wave that could follow widespread AI-driven vulnerability discovery. Organizations that lack the engineering capacity to address large numbers of flaws may be forced to retire systems, abandon products or undertake massive modernization efforts just to stay afloat.

That is not fear, uncertainty and doubt. It is simply realism.

Part of what the CSA effort may be doing is preparing the industry psychologically for that possibility. If the coming years expose deep weaknesses in the global software stack, we may have to accept that not everything can be saved.

These are not abstract questions either.
They are already being discussed among practitioners. I recently had the chance to talk about this very topic with Rich Mogull and Mitch Ashley on the Still Cyber, After All These Years podcast over at Techstrong.

There is also a broader truth here that leaders like Jen Easterly have pointed out repeatedly over the years: the cybersecurity industry exists largely because software quality has historically been poor. Vulnerabilities are not anomalies. They are symptoms of how modern software is built.

If AI ultimately forces the industry to confront that reality and produce better code, the long-term outcome may be healthier than the world we live in today. But the path from here to there is unlikely to be smooth.

That is why efforts like the MythosReady report matter so much. They give practitioners a framework for thinking about what may come next. They help move the conversation away from speculation and toward preparation.

If you have not read the report yet, you should. Download it. Study it. If you are short on time, ask your favorite AI assistant to summarize it for you. Just do not ignore it.

Because preparation is often the difference between chaos and resilience. And right now, preparation may be nine-tenths of the cure.
