Threat Modeling with AI: A Developer-Driven Boon for Enterprise Security
For companies running a modern, adaptive and defense-centered security program, threat modeling is not a new concept. In fact, it’s one of the core tenets of preventative cybersecurity best practices. Being able to find vulnerabilities within software or a network, map them out and remediate them – before an attacker can successfully orchestrate a breach – is the best way to navigate a rapidly expanding threat landscape. If you can eliminate a possible avenue of attack before a threat can even launch, you stand the best chance of keeping your codebase and network safe from harm.

The advent of AI coding has created new threat vectors and significantly increased the enterprise attack surface in multiple ways. Thankfully, in security-proficient hands, AI technology is also a powerful tool for enhancing and accelerating threat modeling. Developers have long struggled to claim a seat at the table in traditional threat modeling programs, but with the right skills, they have the opportunity to wield AI responsibly and seriously cut risk and rework in their codebase.

Why Developers Struggle With Traditional Threat Modeling

Threat modeling has traditionally existed in the realm of security professionals. It has always been their job to predict the many ways threat actors, with ample time and a range of attacks, can enter a network or compromise software. To accomplish this, they held meetings and brainstormed, improved their knowledge through training, and conducted threat hunting to address nagging “what if” questions about potential vulnerabilities. More recently, they have supplemented their personal knowledge with suites of automated scanners and tools designed to spot hundreds, or even thousands, of potential vulnerabilities in every deployment.

Once vulnerabilities were found, security teams would typically send programs and applications back to developers to fix, especially if a vulnerability was deemed critical or dangerous. While this tended to cultivate an unhelpful “us versus them” mentality between developers and AppSec professionals, the results remained impressive for a long time. It may not have been terribly efficient, but the ends often justified the means.

However, times are changing. Despite its past success, traditional threat modeling is becoming increasingly unworkable in a modern software development ecosystem as the threat landscape evolves. Developers have been brought on the threat modeling journey in some enterprise environments, sometimes working side-by-side with their AppSec counterparts. After all, they know their code best, and if they are security-aware, they are well-positioned to identify potential weaknesses that could be exploited. Even today, though, this setup is relatively rare, and many companies do not engage the development cohort in these activities. The reasons vary, but generally it comes down to a combination of the following:

Low security proficiency: With secure coding best practices not a feature of many tertiary degrees, and on-the-job training sporadic at best, many developers are not equipped with the knowledge, tools and skills required to assist in threat modeling.

Slow and manual processes: Even if a senior, security-skilled developer is the right person for the job, traditional threat modeling processes are tedious, manual and rarely integrate well into a development workflow. This can drive good developers away from participating, as they often see these tasks as low-value and at odds with the KPIs they are typically measured against.

Outdated tools and processes: It’s a harsh reality that, by the time a threat model is completed, it is likely already outdated. Static threat models tend to have limited value in enterprise environments for this reason.

The cybersecurity skills shortage has done the industry no favors in maintaining meticulous security standards as the volume of code produced year-on-year increases, especially given the widespread use of AI. However, this is also the greatest opportunity yet for developers to grow their security proficiency, leverage the AI tools already integrated into their workflow, and ultimately become the first line of defense against threat actors who have, to date, proved elusive to defend against.

Evolving Attacks Require Evolving Defenses

The threat landscape today is more dangerous than at any other time in history. We live in an era where bots are commonplace, probing millions of networks for vulnerabilities every second, while billions of Internet of Things (IoT) devices are deployed around the world with limited or no built-in security. Human attackers, too, are becoming increasingly sophisticated and well-trained, and with AI support, cyberattacks are becoming more potent and automated. Attackers also operate in groups that share intelligence and may be supported and financed by organized crime or nation-states.

Security professionals, even armed with automation tools, can’t hope to keep predicting every possible avenue of attack. It’s like trying to hold back the ocean’s tide with a bucket; in that scenario, the size of the bucket is inconsequential.

Modern threat modeling takes a more holistic, developer-focused approach, and it is made far more seamless with the right AI tooling. AppSec teams can no longer keep up with threats from ground zero. Instead, security experts increasingly recommend shifting threat modeling away from the beachheads of our production environments and back into the development process. This gets at the core of what threat modeling was supposed to do in the first place: preventing attacks before they launch by giving attackers no leverage to work with.
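One practical way to shift threat modeling back into development is to express the model as code that lives in the repository and is regenerated as the system changes. Below is a minimal sketch of this idea using OWASP’s open-source pytm library; the order-service system and its elements are illustrative assumptions, so check the attribute names against the current pytm documentation rather than taking them as-is.

```python
# Threat-model-as-code sketch using OWASP pytm (pip install pytm).
# The modeled system is hypothetical; pytm evaluates its built-in
# rules against the elements and flows declared below.
from pytm import TM, Actor, Boundary, Dataflow, Datastore, Server

tm = TM("Order Service")
tm.description = "Minimal model of a customer-facing order flow"

internet = Boundary("Internet")
backend = Boundary("Backend")

customer = Actor("Customer")
customer.inBoundary = internet

api = Server("Order API")
api.inBoundary = backend

orders_db = Datastore("Orders DB")
orders_db.inBoundary = backend

# Dataflows are the main thing pytm's rules inspect for threats.
place_order = Dataflow(customer, api, "Place order")
place_order.protocol = "HTTPS"

persist = Dataflow(api, orders_db, "Persist order")
persist.protocol = "TCP"

tm.process()  # CLI flags such as --dfd, --seq or --report pick the output
```

Because the model is versioned alongside the source, it can be re-run on every pull request, which goes straight at the “already outdated” problem of static threat models described above.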
Getting Started With AI Threat Modeling

Getting started with this new, collaborative threat modeling effort may require small steps at first. It might begin with group meetings run by security awareness personnel, involving developers who have shown an aptitude for security by completing the foundational education that allows them to navigate common security bugs and misconfigurations. It should also include a plan for everyone to work with the same set of tools, for easier communication and information sharing, and a quicker response once vulnerabilities are discovered. Once that is accomplished, and developers and AppSec professionals see and respect each other as equal, supportive colleagues, they can move on to more advanced threat modeling tactics, assisted by approved AI tools. Already, 67% of security researchers leverage LLMs in their threat modeling, yet only 7% of companies use them frequently for this purpose, despite the significant potential.
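One of the simplest ways to fold an LLM into this collaboration is to have it draft a first-pass threat list from a plain-language design description, which the team then reviews together. The sketch below is a minimal example assuming the OpenAI Python client and a hypothetical password-reset feature; any approved model endpoint would slot in the same way.

```python
# Sketch of LLM-assisted threat enumeration. Assumes the openai
# package and an OPENAI_API_KEY in the environment; the model name,
# feature description and prompt wording are all illustrative.
from openai import OpenAI

client = OpenAI()

DESIGN = """
Feature: password reset.
Flow: user requests reset -> email with signed token -> new password form.
Components: web app, Redis token store, SMTP relay.
"""

PROMPT = (
    "You are assisting a threat modeling session. For the design below, "
    "list plausible threats grouped by STRIDE category, and for each "
    "threat suggest one mitigation a developer can implement.\n" + DESIGN
)

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model your organization has approved
    messages=[{"role": "user", "content": PROMPT}],
)

# The output is a draft for human review, not a finished threat model.
print(response.choices[0].message.content)
```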
To be clear, LLMs are a brilliant starting foundation for threat modeling, but they are too prone to hallucination, and too lacking in nuance and contextual understanding, to be the absolute final word on risk. This is why developers must be grounded in security best practices, continuously upskilled, and equipped with traceable AI tooling. In the hands of security-skilled developers, LLMs generally perform well at:

Providing efficient, actionable intelligence as features are being built;

Reducing context-switching, since many popular IDEs feature integrated AI tools;

Delivering guidance in developer-centric language that relates to the work being done;

Building a “breaker” mindset, something that, to date, has been elusive for software “builders”;

Assisting in creating guardrails for AI code generation and boilerplates, as sketched after this list.
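On that last point, a guardrail can start as small as a commit-time check that blocks a handful of known-risky patterns before AI-generated code lands. The script below is a deliberately minimal, hypothetical example; the pattern list is illustrative, and a real setup would pair a check like this with proper SAST tooling.

```python
# Minimal guardrail sketch: flag a few risky patterns often seen in
# AI-generated code. The pattern list is illustrative, not exhaustive.
import re
import sys

RISKY_PATTERNS = {
    r"shell\s*=\s*True": "subprocess with shell=True invites injection",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"(?i)(api_key|password)\s*=\s*[\"'][^\"']+[\"']": "possible hardcoded secret",
}

def scan(paths):
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as handle:
            for lineno, line in enumerate(handle, start=1):
                for pattern, reason in RISKY_PATTERNS.items():
                    if re.search(pattern, line):
                        findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    problems = scan(sys.argv[1:])
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # non-zero exit fails the hook or CI job
```

Wired into a pre-commit hook or CI job, a check like this stops the change until a human reviews the flagged lines.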
Ultimately, it is imperative to recognize that while AI cannot replace human intuition, its integration into the threat modeling process serves as a powerful catalyst for modern development. When well-trained developers leverage AI to handle the heavy lifting of pattern recognition and rapid analysis, they bridge the gap between abstract risk and actionable defense. Integrating these intelligent tools into the early stages of the lifecycle does more than just patch holes; it cultivates a culture of proactive resilience. By augmenting human empathy with machine precision, teams can secure their codebases at scale, ensuring that efficiency and security are no longer at odds, but are instead the foundational pillars of a streamlined workflow.