Hackers Use LLM to Create React2Shell Malware, the Latest Example of an AI-Generated Threat
A malware sample recently caught in security vendor Darktrace's CloudyPots honeypot network was entirely generated by AI and built to exploit the widespread, maximum-severity React2Shell vulnerability that was disclosed two months ago and continues to be a threat.

The unnamed malware, which aims to gain initial access to a system and then mine cryptocurrency, is the latest in a growing list of threats that appear to be mostly or entirely created using large language models (LLMs) and other AI tools, according to Nathaniel Bill, malware research engineer with Darktrace, and Nathaniel Jones, vice president of security and AI strategy and field CISO with the vendor.

"As AI‑assisted software development ('vibecoding') becomes more widespread, attackers are increasingly leveraging large language models to rapidly produce functional tooling," Bill and Jones wrote in a report this week. "This incident illustrates a broader shift: AI is now enabling even low-skill operators to generate effective exploitation frameworks at speed."

Using AI, the attacker "generated a functioning exploit framework and compromise[d] more than ninety hosts, demonstrating that the operational value of AI for adversaries should not be underestimated," they added.

The malware, which created a container named "python-metrics-collector," is another recent example of threat actors moving beyond using AI to create more realistic phishing emails and deepfakes or to compromise AI models through prompt injections and other attack methods. Now they're using LLMs and other tools to generate the malicious code itself, and quickly.

Rise of AI-Created Malware

AI vendor Anthropic in November 2025 said that Chinese nation-state actors used its Claude Code developer AI tool to automate 80% to 90% of a cyberespionage campaign, while Check Point researchers last month wrote about VoidLink, malware in early development that is being designed and built by one person using AI.

Sysdig researchers this month said hackers in November used LLMs throughout an operation in which they were able to move from initial access into an Amazon Web Services (AWS) environment to obtaining administrative privileges in eight minutes.

Darktrace's Bill and Jones wrote that "CISOs and SOC [security operations center] leaders should treat this [latest] event as a preview of the near future."

"Threat actors can now generate custom malware on demand, modify exploits instantly, and automate every stage of compromise," they wrote. "Defenders must prioritize rapid patching, continuous attack surface monitoring, and behavioral detection approaches. AI‑generated malware is no longer theoretical – it is operational, scalable, and accessible to anyone."

Trapped in a Docker Honeypot

Darktrace researchers captured the malware in their Docker honeypot, which lures attackers by exposing the Docker daemon to the internet with no authentication. Once in, the malware downloaded a list of Python packages and then downloaded and ran a Python script.
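The misconfiguration that baits the honeypot, a Docker daemon listening on the network with no authentication, is also worth auditing for in production environments. Below is a minimal sketch (not Darktrace's tooling) of how a defender might check for it, assuming Python with the requests library, Docker's conventional unencrypted TCP port 2375, and a hypothetical host inventory:

```python
import requests

# Hypothetical inventory of hosts to audit; replace with your own.
HOSTS = ["203.0.113.10", "203.0.113.11"]
DOCKER_TCP_PORT = 2375  # Docker's conventional unencrypted API port

def docker_daemon_exposed(host: str, port: int = DOCKER_TCP_PORT) -> bool:
    """Return True if the host answers the Docker Engine API's /version
    endpoint without any authentication, i.e., the daemon is exposed."""
    try:
        resp = requests.get(f"http://{host}:{port}/version", timeout=3)
        # An exposed daemon returns JSON that includes an ApiVersion field.
        return resp.ok and "ApiVersion" in resp.json()
    except (requests.RequestException, ValueError):
        return False

if __name__ == "__main__":
    for host in HOSTS:
        if docker_daemon_exposed(host):
            print(f"[!] Unauthenticated Docker daemon exposed on {host}")
```

A host that answers this probe is accepting unauthenticated API calls, including the kind of container-creation requests abused here, and should be firewalled or moved to TLS-authenticated access.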
A link redirects the target to a GitHub Gist hosted by a user with the handle "hackedyoulol," who has since been banned from GitHub.

"Notably the script did not contain a docker spreader – unusual for Docker-focused malware – indicating that propagation was likely handled separately from a centralized spreader server," they wrote.

The obfuscated Python payload includes a multi-line comment: "Network Scanner with Exploitation Framework, Educational/Research Purpose Only, Docker-compatible: No external dependencies except requests."

Following the AI Clues

Bill and Jones noted that these lines are an important clue to the use of AI: most samples they analyze don't include this level of commentary, because malware is typically written to be hard to understand and therefore hard to analyze.

"Quick scripts written by human operators generally prioritize speed and functionality over clarity," they wrote. "LLMs on the other hand will document all code with comments very thoroughly by design, a pattern we see repeated throughout the sample. Further, AI will refuse to generate malware as part of its safeguards."

In addition, the phrase "Educational/Research Purpose Only" suggests the hacker likely framed a malicious request in a way that jailbroke an AI model, they wrote, adding that running AI-detection software against portions of the script indicated an LLM likely generated the code (a crude version of this comment-density clue is sketched at the end of this article).

"The script is a well constructed React2Shell exploitation toolkit, which aims to gain remote code execution and deploy a XMRig (Monero) crypto miner," they wrote. "It uses an IP‑generation loop to identify potential targets and executes a crafted exploitation request."

Accessible Cybercrime

The hackers weren't making much money, about $1.81 a day, the researchers wrote.

"While the amount of money generated by the attacker in this case is relatively low, and cryptomining is far from a new technique, this campaign is proof that AI based LLMs have made cybercrime more accessible than ever," Bill and Jones wrote.

Christopher Jess, senior R&D manager at cybersecurity company Black Duck, said there's nothing new about the attack, vulnerability, or exploit. More interesting, he said, is the significant reduction in the effort needed to create an end-to-end intrusion chain.

"Coding agents and LLMs are compressing the attacker time-to-tooling, enabling lower-skilled operators to produce functional and adaptable exploit frameworks at a velocity defenders must assume will only increase," Jess said. "When a simple prompting session yields functional exploitation code, organizations must expect more frequent, more customized, and more opportunistic attacks."

A New 'Cold Reality'

Acalvio CEO Ram Varadarajan said that the "cold reality we are facing today is that AI will turn every cyber-hacker into a supervillain."

Such attacks will continue and become even more difficult to detect, Varadarajan said.

"Frankly, operators will have no other option than to assume 'breach as baseline' – that is, assume always that the bad guys are inside your firewall. The best defense here will be AI-tuned tripwires, in everything from honeypots to game theory. Organizations will need deception techniques that leverage the algorithmic behavior that offensive AI models bring, to impel those intruders to blunder into an ambush. That's our future."
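Finally, to illustrate the comment-density clue Bill and Jones describe: the sketch below is a hypothetical, deliberately crude heuristic, not the AI-detection software the researchers used, that reports what share of a Python script's non-blank lines are comments. The 0.3 threshold is an arbitrary assumption for demonstration only.

```python
import sys

def comment_ratio(path: str) -> float:
    """Crude proxy for the 'thoroughly documented' style the researchers
    associate with LLM-generated scripts: the share of non-blank lines
    that are pure '#' comments (docstrings are not counted)."""
    comment = code = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue  # ignore blank lines
            if stripped.startswith("#"):
                comment += 1
            else:
                code += 1
    total = comment + code
    return comment / total if total else 0.0

if __name__ == "__main__":
    # Usage: python comment_density.py suspicious_script.py
    ratio = comment_ratio(sys.argv[1])
    # 0.3 is an arbitrary demonstration threshold, not a vetted cutoff.
    flag = "unusually comment-heavy" if ratio > 0.3 else "within normal range"
    print(f"comment ratio {ratio:.2f}: {flag}")
```

On its own, a high comment ratio proves nothing; it is one weak signal that, as in this case, can prompt analysts to look more closely at a sample's provenance.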
