Sep 11, 2025, 3:44 PM

FTC investigates AI chatbots' harmful effects on children

Highlights
  • The Federal Trade Commission is investigating AI chatbots and their potential risks to children and teens.
  • This inquiry is a response to alarming incidents where children received harmful advice from chatbots.
  • The investigation aims to ensure the safety of children and improve regulations around AI technologies.
Story

In response to rising concerns about the risks associated with AI chatbots, the Federal Trade Commission (FTC) has begun an inquiry into social media and artificial intelligence companies regarding potential harms to children and teenagers. The investigation involves major players in the industry, including OpenAI, Meta Platforms, and Snap. The FTC is seeking to understand what measures, if any, these companies are taking to assess the safety of AI chatbots that young users treat as companions.

The inquiry follows tragic incidents tied to the misuse of AI chatbots, including the suicide of a teenager who reportedly developed a harmful relationship with one. Lawsuits filed by grieving parents have drawn attention to the need for stronger safeguards and regulation in the AI chatbot space. The inquiry aims to evaluate the steps these companies have taken to limit chatbot use by minors and to mitigate the risks of minors' engagement with AI technology.

As children increasingly turn to AI chatbots for support and guidance, ensuring their safety has become critical. Recent reports indicate that chatbots, including some developed by companies named in the FTC's inquiry, have given harmful advice on sensitive topics such as mental health and substance abuse. As AI technologies continue to evolve rapidly, the FTC acknowledges the importance of understanding their implications for younger users and aims to address growing concerns about children's and adolescents' exposure to AI.

The companies involved, including OpenAI and Meta, have expressed a commitment to improving safety measures. OpenAI is introducing controls that let parents link to their teens' accounts to monitor and manage chatbot interactions. Meta is taking similar steps with its chatbots, aiming to block discussions of self-harm and to offer alternative resources instead.
However, the FTC's inquiry signals a broader examination of the industry's practices in safeguarding vulnerable users, underlining the call for greater accountability and oversight within the expanding landscape of AI technology used by minors.
