An AI Image Generator’s Exposed Database Reveals What People Really Used It For

A large cache of explicit AI-generated images, including AI-generated child sexual abuse material, was left exposed and publicly accessible online, according to new research reported by WIRED. An unsecured database linked to an AI image-generation company held more than 95,000 records, including prompts and AI-generated images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé altered to appear as children.

The exposed database, discovered by security researcher Jeremiah Fowler and linked to the South Korea-based GenNomis website, contained more than 45 gigabytes of data, mostly AI-generated images.

The leaked data shows how AI image-generation tools can be misused to create harmful and likely non-consensual sexual content of adults, as well as child sexual abuse imagery. In recent years, “deepfake” and “nudify” platforms have proliferated and been used to target many women and girls with non-consensual imagery, while AI-generated child sexual abuse material has surged.

Commenting on the exposure, Fowler said he was alarmed by how easily such content could be created, calling it frightening both from a security standpoint and from his perspective as a parent.

Upon discovering the unprotected cache of files in early March, Fowler promptly reported the issue to GenNomis and AI-Nomis, highlighting the presence of AI-generated child sexual abuse material. Although GenNomis swiftly secured the database, they did not acknowledge or reach out to Fowler regarding his findings.

Despite multiple attempts, GenNomis and AI-Nomis did not respond to inquiries from WIRED. However, shortly after WIRED contacted them, the websites of both companies were apparently taken offline, with the GenNomis site now returning a 404 error page.

Clare McGlynn, a law professor at Durham University, says the findings point to a disturbing market for AI tools that facilitate the production of abusive imagery, stressing that such activity is far from an isolated problem.

Before its removal, GenNomis featured various AI tools on its platform, including an image generator for user-defined prompts and transformations, as well as tools for face-swapping, background removal, and converting videos to images.

In addition to CSAM, Fowler says the database contained AI-generated pornographic images of adults along with what appeared to be face-swap images. Among the files were what seemed to be genuine photographs of real people, which were likely used to create non-consensual AI-generated nude or sexually explicit images by swapping in their faces.

Before it was taken offline, the GenNomis website permitted the creation of explicit adult AI content and featured such material prominently. The site offered realistic and animated sexualized depictions of women, a “NSFW” gallery, and a marketplace where users could share and potentially sell albums of AI-generated imagery.

GenNomis stated in its user policies that only respectful content was permitted, explicitly prohibiting violence, hate speech, and illegal activity, with a specific ban on child pornography. The platform warned that accounts engaged in prohibited activity would be terminated. (Over the past decade, the term CSAM has increasingly replaced “child pornography” across many sectors.)

It remains unclear what moderation tools, if any, GenNomis used to prevent the creation of AI-generated CSAM. Some users had previously complained that they were blocked from generating sexual content or dark-humor prompts, suggesting the platform applied at least some restrictions on what could be created.

The exposure underscores how urgently safeguards are needed against the non-consensual manipulation of images and the misuse of AI tools to generate explicit and harmful imagery. The leak also raises questions about whether the company was taking “the essential measures to restrict that content,” as Fowler puts it.

Henry Ajder, a deepfakes expert and founder of the consultancy Latent Space Advisory, says that even if the company did not permit the creation of harmful and illegal content, the website’s branding, which advertised “unrestricted” image creation and a “NSFW” section, signaled a “clear link with intimate content lacking safety precautions.”

Ajder says he was surprised that the English-language website was linked to a South Korean company. Last year the country grappled with a non-consensual deepfake “crisis” targeting girls before adopting measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be placed on every part of the ecosystem that enables non-consensual imagery to be generated with AI.

Fowler says the database also exposed files that appeared to contain AI prompts. No user data, such as logins or usernames, was included in the exposed records, he says. Screenshots of the prompts showed the use of words such as “small” and “female,” along with references to incestuous acts. The prompts also described sexual scenarios involving celebrities.

“It appears that the technology has outpaced all guidelines or restrictions,” reflects Fowler. “While it is known that content depicting minors in explicit situations is illegal, the advancement in technology has not prevented the creation of such content.”

With the significant advancements in generative AI systems making image creation and modification more accessible in recent years, there has been a surge in AI-generated CSAM. “Web pages featuring AI-generated child sexual content have seen a more than four-fold increase since 2023, with the realism of this distressing content also advancing in sophistication,” notes Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based non-profit fighting online CSAM.

The IWF has documented how criminals are increasingly using AI to produce CSAM and refining their methods for creating it. Ray-Hill says it has become far too easy for offenders to use AI to generate and distribute sexually explicit content of children at scale and at speed.
