Meta announced the new parental notification features as its CEO and its Instagram chief face questioning in two simultaneous trials over alleged harms to children.
Instagram will begin sending alerts to parents when their teenage children repeatedly search for content related to suicide or self-harm, Meta announced Thursday, in the platform’s most direct response yet to years of pressure over its impact on young users’ mental health. Parents enrolled in Instagram’s supervision program will not need to take any further action for the feature to activate.
The alerts will fire when a teen repeatedly attempts to search for such terms within a short period of time. Attempted searches that would prompt an alert include phrases promoting suicide or self-harm, phrases that suggest a teen wants to harm themselves, and terms such as “suicide” or “self-harm.”
The alerts will be sent to parents via email, text, or WhatsApp, depending on the contact information available, as well as through an in-app notification. Tapping on the notification will open a full-screen message explaining that their teen has repeatedly tried to search Instagram for terms associated with suicide or self-harm within a short period of time. Parents will also have the option to view expert resources designed to help them approach potentially sensitive conversations with their teen.
According to the company:
Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall.
We work to block searches for terms clearly associated with suicide and self-harm, including terms that violate our suicide and self-harm policies. This means we don’t show any results and instead direct people to resources and local organizations that can help.
Meta said it is building similar parental alerts for its AI experiences.
“These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI,” the company said in a blog post. “This is important work and we’ll have more to share in the coming months.”
Instagram already restricts the visibility of self-harm content and redirects teens toward support resources when such material is detected. The new alert system adds a layer of oversight by involving parents directly when warning signals appear, rather than relying solely on content moderation and automated prompts.
Meta is not sharing exactly how many searches it takes, or how quickly they must occur, before a notification is triggered. Both parents and teens will be informed when the feature goes live on their account.
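Meta has described the trigger only in broad strokes: repeated attempts within a short window. As a purely illustrative sketch of how such a mechanism could work, the Python below uses a sliding-window counter over flagged search attempts; the threshold, window length, and term list are hypothetical placeholders, not Meta’s actual values or implementation.

```python
from collections import deque
import time

# All values below are hypothetical -- Meta has not disclosed the
# real threshold, time window, or the scope of its term matching.
SEARCH_THRESHOLD = 3        # flagged attempts needed before notifying
WINDOW_SECONDS = 15 * 60    # placeholder for "a short period of time"
FLAGGED_TERMS = {"suicide", "self-harm"}  # tiny illustrative list


class RepeatedSearchMonitor:
    """Counts flagged search attempts inside a sliding time window."""

    def __init__(self) -> None:
        self._attempts: deque[float] = deque()

    def record_search(self, query: str, now: float | None = None) -> bool:
        """Return True when a parent notification should be sent."""
        if not any(term in query.lower() for term in FLAGGED_TERMS):
            return False
        now = time.time() if now is None else now
        self._attempts.append(now)
        # Discard attempts that have aged out of the window.
        while self._attempts and now - self._attempts[0] > WINDOW_SECONDS:
            self._attempts.popleft()
        return len(self._attempts) >= SEARCH_THRESHOLD
```

In this sketch, a third flagged attempt inside fifteen minutes returns True, at which point a real system would fan the alert out through the channels described above: email, text, WhatsApp, and the in-app notification.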
The alerts are launching in the coming weeks to parents who are enrolled in Instagram’s parental supervision program in the United States, Canada, the United Kingdom, and Australia, with more countries to follow later this year.