OpenAI Thwarts 20 Global Malevolent Campaigns Using AI for Cybercrime and Misinformation
OpenAI said on Wednesday that it has disrupted more than 20 operations and deceptive networks around the world that attempted to use its platform for malicious purposes since the start of the year.
The activity encompassed debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.
“Malicious actors persist in refining and testing our models, yet we have not observed any indication of substantial advancements in their capacity to produce entirely new malware or establish widespread audiences,” the artificial intelligence (AI) company said.
It also said it blocked activity that generated social media content related to elections in the U.S., Rwanda, and, to a lesser extent, India and the European Union, noting that none of these networks attracted viral engagement or sustained audiences.
This included activity by an Israeli commercial company named STOIC (also tracked as Zero Zeno) that generated social media comments about the Indian elections, as previously disclosed by Meta and OpenAI in May.
Some of the cyber operations highlighted by OpenAI are as follows:
- SweetSpecter, a suspected China-based adversary that used OpenAI’s services for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and tooling development. It was also observed making unsuccessful spear-phishing attempts against OpenAI employees to deliver the SugarGh0st RAT.
- Cyber Av3ngers, a group affiliated with the Iranian Islamic Revolutionary Guard Corps (IRGC), which used its models to conduct research into programmable logic controllers.
- Storm-0817, an Iranian threat actor that used its models to debug Android malware capable of harvesting sensitive data, build tooling to scrape Instagram profiles via Selenium, and translate LinkedIn profiles into Persian.
Elsewhere, the company said it blocked several clusters of accounts, including influence operations codenamed A2Z and Stop News, that generated English- and French-language content for subsequent posting on various websites and social media platforms.
“[Stop News] demonstrated an uncommonly high output of visual content,” researchers Ben Nimmo and Michael Flossman noted. “Numerous web articles and tweets were accompanied by images produced using DALL·E. These images featured a cartoonish style, vibrant color schemes, or emotive tones to draw attention.”
Two other networks identified by OpenAI, Bet Bot and Corrupt Comment, were found to have used its API to generate conversations with users on X and send them links to betting websites, and to manufacture comments that were then posted on X, respectively.
The disclosure comes nearly two months after OpenAI banned a set of accounts linked to an Iranian covert influence operation named Storm-2035 that exploited ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election.
“Malicious actors commonly exploited our models to execute tasks in a distinct, intermediate phase of their operations — after acquiring fundamental tools like internet access, email addresses, and social media accounts, but prior to deploying ‘finalized’ products such as social media posts or malware across the internet through various distribution channels,” Nimmo and Flossman elaborated.
Cybersecurity company Sophos, in a report published last week, said generative AI could be abused to disseminate tailored misinformation through micro-targeted emails.
This entails abusing AI models to concoct political campaign websites, AI-generated personas across the political spectrum, and email messages that specifically target those personas based on campaign talking points, allowing a new level of automation that makes it possible to spread misinformation at scale.
“This implies that a user could produce anything ranging from innocuous campaign materials to deliberate misinformation and malicious threats with minimal reconfiguration,” researchers Ben Gelman and Adarsh Kyadige pointed out.
“It is possible to associate any genuine political movement or candidate with endorsing any policy, even if they do not actually support it. Intentional misinformation of this sort can push individuals to side with a candidate they do not actually support or oppose one they previously favored.”


