Deepfakes Can Fool Facial Recognition on Crypto Exchanges

AI-generated deepfakes can fuel disinformation or alter images of real people for questionable purposes. They can also help threat actors evade two-factor authentication, according to an Oct. 9 research report from Cato Networks' CTRL Threat Research.

AI creates videos of fake people looking into a camera

The threat actor analyzed by CTRL Threat Research, known by the alias ProKYC, uses deepfakes to forge government IDs and defeat facial recognition systems. The attacker sells the tool on the dark web to would-be fraudsters, whose primary goal is to breach cryptocurrency exchanges.

Some exchanges require a prospective account holder to both provide a government ID and appear live on camera. With generative AI, the attacker easily creates a realistic image of a person's face. ProKYC's deepfake tool then embeds that image into a fake driver's license or passport.

Crypto exchanges' facial recognition checks require brief proof that the person is actually present in front of the camera. The deepfake tool spoofs the camera and plays an AI-generated video of a face looking left and right.

SEE: Meta is the latest AI giant to build tools for realistic video content.

The attacker then creates an account on the crypto exchange using the identity of the generated, nonexistent person. From there, they can use the account to launder illegally obtained money or commit other forms of fraud. This type of attack, known as New Account Fraud, caused $5.3 billion in losses in 2023, according to Javelin Research and AARP.

Selling methods to breach networks isn't new: ransomware-as-a-service schemes let aspiring attackers buy their way into systems.

How to prevent new account fraud

Cato Research's Chief Security Strategist Etay Maor offered several recommendations for companies looking to block the creation of fraudulent accounts with AI:

  • Organizations should check for common hallmarks of AI-generated video, such as unusually high image quality, since AI can produce footage sharper than what a typical webcam captures.
  • Watch or scan for glitches in AI-generated video, particularly irregularities around the eyes and lips.
  • Collect threat intelligence data from across your organization.
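The first heuristic above, flagging video that looks sharper than a real webcam feed, can be sketched with a simple variance-of-Laplacian focus measure. This is purely illustrative: the `flag_suspicious_frame` helper and its threshold are assumptions for the sketch, not part of Cato's research or any exchange's actual KYC pipeline.

```python
def laplacian_variance(frame):
    """Sharpness measure: variance of a 4-neighbor Laplacian.

    `frame` is a 2D list of grayscale pixel values (0-255). Higher
    variance means stronger edges; AI-generated faces often look
    sharper than genuine webcam captures.
    """
    h, w = len(frame), len(frame[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x]
                   + frame[y][x - 1] + frame[y][x + 1]
                   - 4 * frame[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)


def flag_suspicious_frame(frame, sharpness_threshold=500.0):
    """Flag frames sharper than a typical webcam would produce.

    The threshold is a placeholder; in practice it would be
    calibrated against genuine footage from the enrollment flow.
    """
    return laplacian_variance(frame) > sharpness_threshold
```

A uniform (blurry) frame yields near-zero variance and passes, while a high-contrast synthetic pattern far exceeds the threshold. A real deployment would combine this with liveness detection and artifact checks rather than rely on sharpness alone.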

Striking the balance between too much and too little scrutiny can be tricky, Maor explained in the Cato report. “As mentioned previously, establishing biometric authentication mechanisms that are extremely restrictive can yield numerous false-positive alerts,” he stated. “Conversely, lenient controls can culminate in fraud.”
