Meta's AI chatbot exposes user chats, sparking privacy concerns
- Meta's AI chatbot introduced a 'Discover' feed that makes user chats public.
- User-submitted conversations could include sensitive information, such as names and medical details.
- Users are urged to review privacy settings to safeguard personal information.
In April 2025, Meta launched its AI chatbot app, designed for casual interactions as well as deeply personal conversations on topics including health and relationships. The app incorporated a feature called the 'Discover' feed, which publicizes chats submitted by users. These chats often included sensitive information, such as personal experiences related to legal and medical issues, along with names and profile photos, making privacy a significant concern. Users were encouraged to review their privacy settings and actively manage their chat histories to prevent inadvertent sharing of personal information.

The feature prompted immediate backlash from privacy advocates and raised questions about the platform's default privacy settings. Many users were reportedly unaware that their conversations could be made public, potentially exposing sensitive details without their knowledge or consent. Because the app advertised its interactions as 'private,' the revelation that user chats could appear in the public domain undermined users' understanding of, and trust in, the platform.

In response to the controversy, Meta provided ways for users to change their privacy settings for chat visibility. Instructions were issued for updating settings directly through the app, giving users the option to restrict the visibility of their submitted prompts. Users could also delete previous conversations to ensure that no sensitive information remained accessible to others on the platform. However, it remained unclear how many users took advantage of these privacy controls, or whether Meta would revise its data-sharing policies in light of the growing backlash and public concern.

The legal implications of the feature were reminiscent of a separate case involving a pseudonymity challenge, in which a court disclosed a plaintiff's sensitive information.
In that instance, the court believed privacy measures would prevent public disclosure, but the private information was eventually made public, contradicting the court's initial assessment and highlighting the ongoing tension between privacy rights and public legal processes. Both scenarios underscored the challenges individuals face in protecting their private information, whether in social applications or legal settings, and suggested a need for greater awareness and stronger protective measures across platforms. The publication of sensitive data, whether inadvertent or through legal processes, raises serious ethical questions about user privacy that both Meta and the judiciary must address diligently.