Apr 1, 2025, 12:00 AM

AI-generated explicit images discovered, raising concerns over child safety

Highlights
  • Fowler found explicit AI-generated images linked to South Korean company AI-NOMIS.
  • The images included children and celebrities as subjects, raising ethical concerns.
  • The incident underscores the need for effective regulations against harmful AI use.
Story

In March 2025, Fowler, an independent security researcher, discovered an exposed S3 cloud storage bucket belonging to an AI image-generation service linked to the South Korean company AI-NOMIS. The bucket contained explicit AI-generated images of children, as well as images of celebrities depicted as children. The content originated from GenNomis, a web app marketed as a "nudify" service that allowed users to face-swap images without the subjects' consent. The material raised serious ethical questions and highlighted the absence of moderation and guardrails to prevent such abuse of the technology.

The bucket also held ordinary, non-explicit photographs of women, suggesting they were being used as source material for non-consensual pornographic content generated by AI and underscoring how easily individuals can be exploited. Fowler documented his findings and reported them to the GenNomis and AI-NOMIS teams on March 10, 2025. The companies took the content down without responding, an alarming silence given the seriousness of the issue. Although the site warned that generating explicit content could lead to legal action and claimed to have abuse-prevention measures, the quiet disappearance of the material raised further concerns about accountability and proactive safeguards in the industry.

Fowler described the experience as a rare behind-the-scenes look at AI image generation, one that highlights the persistent problem of explicit content that, although computer-generated, remains illegal and unethical. His findings show how AI technology can be turned to harmful purposes and draw attention to the broader implications of deepfake technology and its potential for exploitation.

As news of the incident spread, governments and organizations moved to address AI-generated explicit imagery. A bill targeting sexually explicit deepfakes had already passed the Senate, reflecting growing awareness of the danger such technologies pose to individuals. In late 2024, several major tech companies signed a pledge not to let their products be used for non-consensual deepfake pornography or child sexual abuse material, a collective effort to curb the misuse of AI. Yet, as Fowler's discovery shows, demand for this type of content remains considerable and will continue to challenge industry standards and regulation.
