AI’s role in global digital fraud exposed by BioCatch study

A new report commissioned by BioCatch has revealed the significant impact of artificial intelligence (AI) on digital fraud and financial crime worldwide. The study canvassed the views of 600 fraud-management decision-makers, anti-money laundering (AML) professionals, and risk and compliance leaders at global financial institutions, including 100 in Australia, and found that AI is contributing substantially to the vulnerability of digital identities.

Key findings from the survey include that 74% of organisations worldwide currently use AI for financial crime detection, compared with only 54% of Australian organisations. Additionally, 58% of Australian professionals believe criminals are more advanced at using AI to commit financial crime than banks are at employing AI to detect it. Worryingly, 44% of Australian organisations manage fraud and financial crime in separate departments with no cross-collaboration. Meanwhile, 41% of Australian respondents anticipate that deepfake videos will represent the biggest threat to their organisations by 2024.

Tom Peacock, BioCatch Director of Global Fraud Intelligence, explains the dangerous potential of AI: “Artificial intelligence can supercharge every scam on the planet, flawlessly localising the language, slang, and proper nouns used and personalising for every individual victim the scam type, images, audio, and/or video involved. AI gives us scams without borders and will require financial institutions to adopt new strategies and technologies to protect their customers.”

Another worrying trend revealed by the report is the growing use of voice-cloning AI technologies. A startling 84% of Australian respondents (and 91% globally) reported that their organisation is now rethinking the use of voice verification for high-profile customers due to concerns about AI's voice-cloning capabilities. More than half of the organisations represented in the survey admitted losing between $5 million and $25 million to AI-powered attacks in 2023.

BioCatch’s Chief Marketing Officer, Jonathan Daly, pointed out the necessity of new authentication methods amid the unfolding AI era, arguing that “the AI era requires new senses for authentication,” such as behavioural intent signals, which he said have a proven ability to “sniff out deepfakes and voice-clones in real time to keep people’s hard-earned money safe.”

Interestingly, despite these challenges, many respondents reported that their organisations are already deploying AI defensively. Close to three-quarters of those surveyed confirmed their employer uses AI to detect fraud and/or financial crime, and 80% of Australian respondents said AI has increased their organisation’s speed of response to potential threats.

Siloed fraud and financial crime teams remain common, with over 40% of respondents reporting that their company handles these issues in separate departments that do not cooperate. Even so, there is a growing consensus, shared by nearly 90% of respondents, in favour of greater information sharing between financial institutions and government authorities to combat fraud and financial crime.

One thing is clear from the report, the researchers conclude: crimes involving AI pose a significant and growing threat to the global financial sector, necessitating a rapid and collaborative response to fortify digital identities.
