AI advancements being exploited for email cyberattacks

There has been remarkable progress in generative artificial intelligence (AI) over the past year, with ChatGPT reaching its first 1 million users just five days after launch. But the spread of generative AI has also given cybercriminals an opportunity to exploit its capabilities for advanced cyberattacks, according to Mike Britton, CISO of Abnormal Security.

One clear example is the use of AI to generate malicious emails. Cybercriminals have traditionally launched campaigns from fixed formats or templates, producing large numbers of near-identical attacks that traditional security systems could track and detect with ease. Generative AI undermines that detection model by offering a quick, efficient way to create unique content for every email, making such threats significantly harder to spot.
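To see why templated campaigns were so trackable, consider how tightly near-identical messages cluster under even a crude similarity measure; a single signature then covers the whole campaign. The sketch below uses Python's standard-library difflib (the sample strings are invented for illustration), and shows how an AI-rewritten variant falls well below any clustering threshold.

```python
# Why templated phishing was easy to track: near-identical bodies score
# very high under simple string similarity, so one signature catches the
# whole campaign. A uniquely rewritten AI variant does not.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; values near 1.0 mean the texts are near-duplicates.
    return SequenceMatcher(None, a, b).ratio()

template_a = "Dear user, your account is suspended. Click here to verify."
template_b = "Dear user, your account is suspended. Click now to verify."
rewritten = "We noticed unusual activity; please confirm your details via the portal."

print(similarity(template_a, template_b))  # high -> clusters as one campaign
print(similarity(template_a, rewritten))   # low  -> evades template matching
```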

Generative AI can also make social engineering attacks and email threats more sophisticated. Tools like the ChatGPT API could be misused to craft realistic phishing emails, polymorphic malware, and convincing fraudulent payment requests. And although OpenAI has placed restrictions on what ChatGPT will produce, cybercriminals have responded by building malicious alternatives such as WormGPT and FraudGPT, which lack the safeguards meant to discourage misuse.

Within the past year, Abnormal Security has detected several instances of likely AI-generated cyberattacks, using tools such as CheckGPT to identify AI involvement. In one case, an attacker impersonated an insurance company to deliver malware. Posing as a representative of ‘Customer Benefits Insurance Group’, the attacker sent an email supposedly containing benefits information and an enrollment form, warning that failure to complete the form would result in a loss of coverage. The attachment, however, was suspected of containing malware, potentially putting the recipient’s system at risk.
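CheckGPT's internals are not public, but a common technique in this class of detector is to score how statistically predictable a text is to a language model, since machine-generated prose tends to show unusually low perplexity. The sketch below is a generic illustration of that idea using GPT-2 via Hugging Face transformers, not a reconstruction of CheckGPT; the model choice, threshold, and sample text are all assumptions.

```python
# Generic perplexity heuristic for spotting machine-generated text.
# This is NOT CheckGPT; it only illustrates the detector class.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # The model's loss is the mean negative log-likelihood per token,
    # so exp(loss) gives the perplexity of the text.
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

THRESHOLD = 40.0  # hypothetical cutoff; real systems combine many signals
body = "Your benefits enrollment form is attached. Please complete it by Friday."
if perplexity(body) < THRESHOLD:
    print("Unusually predictable text: possibly machine-generated")
```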

In another case, an attacker impersonated a Netflix customer service representative to conduct a credential phishing attack. The email claimed the recipient’s Netflix subscription was about to expire and directed them to renew it via a provided link, which in fact led to a malicious site designed to steal sensitive information. To appear legitimate, the attacker used an authentic-looking helpdesk domain belonging to an online toy shopping app.
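That lookalike-domain detail suggests a simple defensive heuristic: flag any message whose links resolve to a registrable domain that does not match the brand the message claims to come from. Below is a minimal standard-library sketch of the idea; the domain names are hypothetical, and a production check would consult the Public Suffix List rather than naively taking the last two labels.

```python
# Flag links whose registrable domain doesn't match the claimed brand.
# Hypothetical domains throughout; real code should use the Public
# Suffix List instead of this naive last-two-labels split.
from urllib.parse import urlparse

EXPECTED_DOMAINS = {"netflix": {"netflix.com"}}

def registrable_domain(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def link_matches_brand(url: str, brand: str) -> bool:
    return registrable_domain(url) in EXPECTED_DOMAINS.get(brand, set())

# The Netflix lure linked to an unrelated helpdesk domain -> mismatch.
print(link_matches_brand("https://help.toyshop-support.example/renew", "netflix"))  # False
```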

Abnormal Security also reported an attempted invoice fraud in which the attacker posed as a manager at the cosmetics company LYCON. Citing system irregularities, the attacker requested a list of open or overdue invoices and warned the recipient to halt payments to previously used accounts. The ultimate aim was to reroute payments into the attacker’s account.

This trend reflects growing concern over the ease with which AI-generated email attacks bypass traditional security measures. Typically text-based, these attacks rely on social engineering and are sent through legitimate email service providers, so they land directly in employee inboxes and leave it to the individual recipient to decide whether to engage. Moreover, the absence of grammatical errors and typos, long the telltale signs of a scam email, makes these messages increasingly difficult for people to recognise as threats.

Although generative AI has been in wide use for only a year, it is already apparent that it is a powerful tool with significant potential for misuse. Security leaders must therefore prioritise cybersecurity measures against these threats before they escalate. It is becoming increasingly clear that the only defence against AI-led attacks is an AI-native solution that leans on known good rather than known bad. By understanding the identities of individuals within an organisation, the context of their communication, and the content of their email, such solutions can protect against threats that traditional security measures struggle with.
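As a rough illustration of the ‘known good’ idea (an assumed design sketch, not any vendor’s actual implementation), a baseline of who normally emails whom lets a system flag a first-time sender making a risky request, even when the message itself is fluent and matches no known-bad signature. All addresses and keywords below are hypothetical.

```python
# Sketch of known-good behavioural baselining: learn normal
# sender/recipient pairs, then flag first-contact senders making
# payment- or credential-related requests.
from collections import defaultdict

class KnownGoodBaseline:
    def __init__(self):
        # recipient -> set of senders previously observed writing to them
        self.seen = defaultdict(set)

    def observe(self, sender: str, recipient: str) -> None:
        self.seen[recipient].add(sender)

    def is_anomalous(self, sender: str, recipient: str, body: str) -> bool:
        first_contact = sender not in self.seen[recipient]
        risky = any(k in body.lower()
                    for k in ("invoice", "payment", "password", "renew"))
        return first_contact and risky

baseline = KnownGoodBaseline()
baseline.observe("manager@lycon.example", "ap@company.example")
print(baseline.is_anomalous("attacker@lyc0n.example", "ap@company.example",
                            "Please reroute payment for all open invoices"))  # True
```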
