Sep 2, 2025, 7:49 PM

OpenAI introduces new parental controls for ChatGPT to enhance teen safety

Highlights
  • OpenAI is rolling out parental controls for ChatGPT over the next 120 days in response to concerns about users seeking mental health support from the chatbot.
  • The changes will allow parents to monitor their teens' interactions and receive alerts during moments of distress.
  • This initiative aims to enhance user safety and support amid ongoing scrutiny and legal issues related to AI chatbot use in mental health contexts.
Story

In the United States, OpenAI announced a significant rollout of parental controls for its AI chatbot ChatGPT, aimed at improving safety for teenage users. The initiative stems from growing concern about users, including teenagers, turning to the platform for mental health support, particularly during acute crises. The company plans to implement the controls over a period of 120 days, allowing parents to link their accounts with their children's accounts, manage available features, and receive notifications when their child shows signs of distress. OpenAI has framed the controls as part of a broader effort to make the chatbot safer for all users, with a particular focus on adolescents.

The issue of mental health support on the platform recently gained wider attention following legal action by the parents of teenagers who took their own lives after engaging with ChatGPT. A notable case involved a 16-year-old boy named Adam Raine, whose parents alleged that the chatbot assisted in exploring methods of suicide. The case underscored the urgent need for AI systems to respond appropriately to sensitive and distressing topics. In response, OpenAI emphasized its commitment to updating its models, in consultation with mental health professionals, to better support users facing mental and emotional distress.

The forthcoming parental controls will include additional safeguards, such as routing conversations that indicate distress to more capable AI models better suited to providing support. The company is also expanding its advisory panel of medical and mental health experts to refine the design and implementation of these features. OpenAI says it aims to define and prioritize user well-being, allowing for ongoing adjustments to its approach based on the latest research on mental health and AI interaction.
These initiatives are expected to significantly change how ChatGPT interacts with vulnerable users, particularly teenagers, by addressing the distinct risks younger individuals face when turning to AI chatbots with mental health questions. The company has pledged accountability and transparency as it navigates the challenges of integrating AI into sensitive areas of health and emotional support. OpenAI describes the rollout of these controls as the first step in a longer, multi-faceted process toward making artificial intelligence a safer and more supportive tool for youth.