Gone Phishing: the Generative AI tsunami

Generative AI and the phishing email explosion – how to use AI to fight back
Written by Oakley Cox, PhD, Analyst Technical Director, Darktrace, Asia Pacific.

Generative AI (GenAI), specifically ChatGPT, hit the global technology scene like a tsunami at the end of 2022 – and the effects of that wave have only grown throughout 2023.

While GenAI is not a new tool, improvements in computing power have enabled the most recent chatbots to take centre stage as the next technology to revolutionise our ways of living and working. Alongside that enthusiasm, however, we have seen a backlash against AI with an emphasis on its perceived perils – chief among them the use of AI for nefarious or malicious purposes, particularly cyber-crime.

Due to widespread accessibility, GenAI has upgraded threat actors’ email phishing capabilities, making them much more effective.

Earlier this year, Darktrace published research demonstrating that while the number of email phishing attacks across our customer base has remained steady since ChatGPT’s release, those that rely on tricking victims into clicking malicious links have declined. At the same time, linguistic complexity – including text volume, punctuation, and sentence length – has increased. We also found a 135 per cent increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT.
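
Darktrace has not published the exact features behind those measurements, but the kind of linguistic-complexity indicators described – text volume, punctuation density and average sentence length – can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical illustration; the function name linguistic_features and the specific feature choices are our own assumptions, not Darktrace’s methodology.

    import re
    from statistics import mean

    def linguistic_features(body: str) -> dict:
        """Toy complexity indicators for one email body (illustrative only)."""
        sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
        words = re.findall(r"[A-Za-z']+", body)
        punctuation = re.findall(r"[,;:.!?\"'()-]", body)
        return {
            "text_volume": len(words),  # total word count
            "punctuation_density": len(punctuation) / max(len(words), 1),
            "avg_sentence_length": (mean(len(s.split()) for s in sentences)
                                    if sentences else 0.0),
        }

    sample = ("Dear colleague, your mailbox quota has been exceeded. "
              "Please verify your account within 24 hours; otherwise, "
              "access will be suspended!")
    print(linguistic_features(sample))

Tracking indicators like these across a corpus over time is one simple way to quantify the shift towards longer, better-punctuated phishing text noted above.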

The trend raises concerns that GenAI tools such as ChatGPT could be providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale – such as an email that looks like it comes from your boss, with the correct spelling, grammar, punctuation and tone.

How the generative AI email landscape is evolving

Most recently, between May and July this year, Darktrace has seen changes in attacks that abuse trust. The malicious email no longer looks like it came from your boss; it looks like it came from the IT team. Our researchers discovered that while VIP impersonation – phishing emails that mimic senior executives – decreased by 11 per cent, impersonation of the internal IT team increased by 19 per cent.

The changes are typical of attacker behaviour: switching up tactics to evade defences. The findings suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks. With GenAI at their fingertips, the problem may well escalate, with tools that increase linguistic sophistication, and highly realistic voice deepfakes, tricking employees with even greater success.

With email compromise remaining the primary source of business vulnerability, generative AI has added a new layer of complexity to cyber defence. As GenAI becomes more mainstream – across images, audio, video, and text – we can only expect to see trust in digital communications continue to erode.

It’s not all doom and gloom: AI can also be harnessed for good

While there is plenty of talk and speculation about the negative aspects of AI and security, it is important to remember that no AI is inherently bad; it is how humans apply it that can create bad outcomes, such as when cyber attackers abuse it. But crucially, humans – specifically, cyber security teams – can also augment themselves with AI for good, to help fight off cyber-attacks, whether AI-powered or not.

Defensive AI that knows the business and understands employee behaviour – AI that self-learns and analyses normal communication patterns, such as an employee’s tonality and sentence length – can determine for each email whether it is suspicious or legitimate. By recognising these subtle nuances, it will always be stronger than an attacker’s AI trained solely on globally available data. Put simply, the way for defenders to stop hyper-personalised, AI-powered email attacks is to have an AI that knows more about your business than external GenAI ever could.
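
Darktrace’s self-learning models are proprietary, so the toy Python below is purely an illustrative sketch of the underlying idea: baseline a sender’s normal style, then score deviations from it. The SenderBaseline class, the five-email minimum, and the use of average sentence length (for example, from the hypothetical linguistic_features helper sketched earlier) are assumptions for illustration, not the product’s actual method.

    from statistics import mean, pstdev

    class SenderBaseline:
        """Track one sender's historical average sentence lengths."""
        def __init__(self):
            self.history: list[float] = []

        def learn(self, avg_sentence_length: float) -> None:
            self.history.append(avg_sentence_length)

        def anomaly_score(self, avg_sentence_length: float) -> float:
            """Z-score-style deviation from the sender's learned norm."""
            if len(self.history) < 5:  # too little data to judge
                return 0.0
            mu, sigma = mean(self.history), pstdev(self.history)
            return abs(avg_sentence_length - mu) / (sigma or 1.0)

    baseline = SenderBaseline()
    for length in [14.2, 15.1, 13.8, 14.9, 14.5]:  # the boss's usual style
        baseline.learn(length)

    print(baseline.anomaly_score(14.6))  # in character: low score
    print(baseline.anomaly_score(32.0))  # out of character: worth flagging

A real system would combine many such signals – tone, vocabulary, sending patterns, link behaviour – but the principle is the same: the model that knows what ‘normal’ looks like for your people is the one best placed to spot the email that does not fit.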

Ultimately, the cyber security battle is still human versus human. No matter the outcome, there is a real person – a sinister threat actor – moving behind a screen; the AI itself is blameless. As security teams, we need to look to AI to help solve our security woes, not just view it as a threat vector. If we can collectively achieve this, we stand a fighting chance against AI-powered threats.
