techcrunch.com
Meta is pausing teens’ access to its AI characters across its apps globally while it develops an updated version with stronger safety measures. The decision follows a lawsuit alleging that Meta failed to protect children from sexual exploitation, as well as recent reports that the company limited discovery regarding social media’s impact on teen mental health.
Meta said the pause responds to feedback from parents asking for more control. The updated AI will include built-in parental controls and provide age-appropriate responses restricted to safe topics such as education and sports. This aligns with Meta’s recent rollout of parental supervision tools on Instagram. The move comes amid broader industry scrutiny: AI startup Character.AI recently restricted open-ended conversations for minors, and OpenAI added teen safety rules for ChatGPT.
