AI Pulse: Election Deepfakes, Disasters, Scams & more

In a classic piece of 1980s science fiction, a detective is tasked with identifying advanced cyborgs that are virtually indistinguishable from humans; his entire mission hinges on tracking these synthetic beings down.

From the outset, critics have warned about the looming threat of AI: as the technology advances, it becomes progressively harder for people, and for machines, to tell genuine documents, images, and recordings apart from AI-generated ones.

Detecting some AI deepfakes is already proving a serious challenge. In a notable incident in September, the Chairman of the U.S. Senate Foreign Relations Committee joined a video call with someone he believed to be a genuine contact from Ukraine. The email that set up the meeting was fraudulent, and the call itself, which appeared to feature the real foreign official, was an AI-driven hoax. When the conversation veered into politically sensitive territory, the Chairman and his staff grew suspicious and ended the call.

We are all potential targets
It’s important to recognize that public figures are not the only targets of synthetic media scams. According to data Trend Micro shared with Dark Reading last summer, 80% of consumers have encountered deepfake images, 64% have come across deepfake videos, and 35% have personally experienced a deepfake scam.

Educating people about deepfakes and other AI-generated threats is clearly essential. But as Trend’s Shannon Murphy points out, the human eye is often not sharp enough to spot these fabrications on its own. Technology-based tools are therefore indispensable, both to identify content that declares its AI origin and to detect AI-generated content that deliberately conceals it.

Revealing AI’s existence
On the ‘AI identification’ side, one widely endorsed approach is digital watermarking: machine-readable patterns embedded within AI-generated content. The Brookings Institution notes that while such watermarks are effective, they are not immune to tampering and are difficult to standardize without sacrificing reliability.
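To make the watermarking idea concrete, here is a deliberately minimal sketch in Python (using only NumPy) of the underlying principle: a known bit pattern is hidden in the least significant bits of an image so that a machine can check for it later. The function names and the eight-bit pattern are invented for illustration; real AI watermarking schemes are far more robust, but the sketch also shows how easily a naive mark can be destroyed, which is exactly the tampering concern Brookings raises.

```python
# Toy illustration of a machine-readable watermark: a fixed bit pattern is
# embedded in the least significant bits of pixel values and checked later.
import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Write PATTERN into the LSBs of the first len(PATTERN) pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[: len(PATTERN)] = (flat[: len(PATTERN)] & 0xFE) | PATTERN
    return marked

def has_watermark(image: np.ndarray) -> bool:
    """Check whether the expected pattern is present in the LSBs."""
    flat = image.reshape(-1)
    return np.array_equal(flat[: len(PATTERN)] & 1, PATTERN)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    marked = embed_watermark(img)
    print(has_watermark(img), has_watermark(marked))  # almost certainly False, then True
    # Flipping even one marked low-order bit breaks detection, which is why
    # robustness against tampering is the hard part of real watermark designs.
```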

Adobe is taking a similar approach with Content Credentials, a system that lets creators and publishers cryptographically certify their work and use metadata to establish authorship, creation time, and whether AI was involved. Content Credentials conforms to the C2PA standard and can be applied to images, videos, and audio files.
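As a rough sketch of what cryptographic certification involves, the example below hashes a media file, attaches authorship and AI-involvement metadata, and signs the bundle with an Ed25519 key so that any later alteration is detectable. This is not the actual C2PA or Content Credentials manifest format; the field names and helper functions are assumptions made for illustration.

```python
# Conceptual sketch of signed provenance metadata (not the real C2PA format).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_credential(media_bytes: bytes, author: str, used_ai: bool,
                    key: Ed25519PrivateKey) -> dict:
    claim = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "author": author,
        "ai_generated": used_ai,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_credential(media_bytes: bytes, credential: dict, public_key) -> bool:
    claim = credential["claim"]
    if claim["sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except Exception:
        return False  # signature does not match the claimed metadata

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...raw image bytes..."
    cred = make_credential(media, "Jane Photographer", used_ai=False, key=key)
    print(verify_credential(media, cred, key.public_key()))         # True
    print(verify_credential(media + b"x", cred, key.public_key()))  # False
```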

OpenAI, for its part, is focused on the detection side. According to VentureBeat, GPT-4o is designed to thwart deepfakes by identifying content produced by generative adversarial networks (GANs), running audio and video anomaly checks, verifying voice authenticity, and confirming that audio and visual elements are in sync, for instance that mouth movements and breathing patterns in a video match the speech.
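The sketch below illustrates one such consistency check in simplified form: in genuine footage, the loudness of the speech should roughly track how open the speaker’s mouth is, and a lip-synced fake tends to break that correlation. It is not OpenAI’s implementation; the input signals are stand-ins for what a real face tracker and audio pipeline would produce.

```python
# Hypothetical audio-visual sync check: correlate per-frame mouth openness
# (from a face-landmark tracker) with the speech loudness envelope.
import numpy as np

def av_sync_score(mouth_openness: np.ndarray, audio_envelope: np.ndarray) -> float:
    """Pearson correlation between the two per-frame signals (-1.0 to 1.0)."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    return float(np.mean(m * a))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = np.abs(rng.standard_normal(300))                 # stand-in loudness per frame
    genuine_mouth = speech + 0.2 * rng.standard_normal(300)   # tracks the audio
    spoofed_mouth = np.abs(rng.standard_normal(300))          # unrelated motion
    print(f"genuine: {av_sync_score(genuine_mouth, speech):.2f}")  # high correlation
    print(f"spoofed: {av_sync_score(spoofed_mouth, speech):.2f}")  # near zero
```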

The ultimate solution lies in comprehensive defense
Deepfakes and other AI threats will keep testing our ability to tell real from fake and will keep putting pressure on institutions. Alert individuals, AI identifiers, and analytical AI detection technologies are all critical defenses, but each has limitations, so further measures are needed. Zero-trust models help orient organizations and processes toward a “trust nothing, verify everything” posture: before acting on any piece of digital content, assess the risk of the action it is asking you to take.
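As a hypothetical illustration of what “trust nothing, verify everything” can look like in practice, the sketch below gates automated action on inbound media behind provenance verification, a detector score, and the riskiness of the requested action. The thresholds, risk tiers, and checker fields are assumptions, not a prescribed implementation.

```python
# Hypothetical zero-trust gate: a request proceeds only if provenance checks out,
# automated detection does not flag the media, and the action is low enough risk
# to automate at all.
from dataclasses import dataclass

@dataclass
class MediaAssessment:
    provenance_verified: bool   # e.g. a valid signed credential on the media
    deepfake_score: float       # 0.0 (clean) to 1.0 (almost certainly synthetic)
    action_risk: str            # "low", "medium", or "high"

def allow_automated_action(a: MediaAssessment, max_score: float = 0.2) -> bool:
    """Verify everything before acting; escalate high-risk actions to a human."""
    if a.action_risk == "high":
        return False                      # wire transfers, credential resets, etc.
    if not a.provenance_verified:
        return False                      # unsigned or tampered content
    return a.deepfake_score <= max_score  # detector must not flag the media

if __name__ == "__main__":
    call = MediaAssessment(provenance_verified=False, deepfake_score=0.7,
                           action_risk="high")
    print(allow_automated_action(call))  # False: route to human verification
```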

Combining these strategies with legal and regulatory frameworks yields a defense-in-depth posture that offers the strongest available shield against AI-driven threats.
