Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left exposed and openly accessible online, according to new research reported by WIRED. An unsecured database linked to an AI image-generation company contained more than 95,000 records, including prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé that had been digitally altered to make them appear as children.
The exposed database, discovered by security researcher Jeremiah Fowler and linked to the South Korea-based website GenNomis, held more than 45 gigabytes of data, consisting mostly of AI-generated images.
The leaked data shows how AI image-generation tools can be weaponized to create deeply harmful and likely nonconsensual sexual content of adults, as well as child sexual abuse material. The proliferation of “deepfake” and “nudify” platforms has led to countless women and girls being targeted with nonconsensual, manipulated imagery, accompanied by a surge in AI-generated child sexual abuse content.
Commenting on the exposure, Fowler said he was troubled by how easily such content could be created, describing the risks as alarming both from a security standpoint and from his perspective as a parent.
After discovering the unprotected cache of files in early March, Fowler promptly reported it to GenNomis and AI-Nomis, flagging the presence of AI-generated child sexual abuse material. Although GenNomis quickly secured the database, the company never acknowledged his report or responded to him.
Despite multiple attempts, neither GenNomis nor AI-Nomis responded to inquiries from WIRED. Shortly after WIRED contacted them, however, both companies’ websites appeared to be taken offline, with the GenNomis site now returning a 404 error page.
Clare McGlynn, a law professor at Durham University, said the findings underscore the market demand for AI tools that facilitate the production of abusive imagery, noting that such activity is far from limited to isolated instances.
Before its removal, GenNomis offered a range of AI tools, including an image generator that produced pictures from user prompts and transformed uploaded images, as well as tools for face-swapping, background removal, and converting videos into images.
As for the content stored on GenNomis, Fowler said the database contained pornographic AI-generated images of adults as well as images that appeared to be face swaps. Explicit AI imagery was featured prominently on the site itself, including a section dedicated to sexualized depictions of women, ranging from photorealistic to animated styles.
GenNomis’s user guidelines required respectful content and prohibited violence and hate speech, explicitly banning child pornography and other illegal activities. The platform’s policies warned that violators would have their accounts terminated immediately. (Over the past decade, the term CSAM has increasingly replaced “child pornography” across many sectors.)
It remains unclear what moderation systems, if any, GenNomis used to prevent the creation of AI-generated CSAM. Some users had previously complained that they were blocked from generating sexual content or dark-humor prompts, suggesting that at least some guardrails existed, though their effectiveness is in doubt.
Fowler also noted that the database contained many “face-swap” images that appeared to be created from real photographs, underscoring how such tools can be used to digitally manipulate people’s images without their consent.
