Ireland Threatens Facebook and TikTok with Fines Under New Harmful Content Rules
- The Online Safety Code, effective next month, requires social media platforms to protect users, particularly children, from harmful content.
- Platforms such as Facebook, TikTok, and YouTube must implement measures against cyber-bullying and violent content, with fines for violations of up to €20 million or 10% of annual turnover, whichever is higher.
- The new regime ends self-regulation for social media companies in Ireland, putting accountability and user protection at its centre.
Ireland will implement the Online Safety Code next month, tightening the regulation of social media platforms and addressing the spread of harmful content online. The code, developed in conjunction with the European Commission, applies to major platforms such as Facebook, YouTube, and TikTok that serve EU users from their headquarters in Ireland.

Under the code, platforms must take proactive measures to protect users, especially children, from violent content, cyber-bullying, and child sexual abuse material. Key requirements include age verification to prevent minors from accessing inappropriate content. Companies that fail to comply face penalties of up to €20 million or 10% of their annual turnover, whichever is higher. Although the rules take effect immediately, platforms are granted a nine-month period to update their IT systems.

Ireland's Online Safety Commissioner, Niamh Hodnett, stressed the urgency of the initiative, saying platforms must change their behavior to dispel the perception that social media is a 'Wild West' with insufficient oversight. The new statutory regime marks a shift away from self-regulation towards a more accountable framework, underscoring Ireland's commitment to a safer online environment in which platforms are held responsible for their content moderation practices and actively protect their users.