Instagram Alerts Parents on Teens’ Harmful Searches

Summary

Instagram will alert participating parents if teens search for self-harm themes.

Why this matters

This update emphasizes Instagram's focus on safety for young users and involves parents in monitoring risky online behavior.

Instagram will notify parents when their children search repeatedly for terms linked to suicide or self-harm. This alert system applies to parents participating in Instagram’s parental supervision program. Instagram already blocks such content from appearing in teen accounts’ search results and provides helpline resources.

The feature arrives amid ongoing legal proceedings involving Instagram’s parent company, Meta Platforms. Trials in Los Angeles and New Mexico are examining allegations that Meta’s platforms are intentionally addictive to minors and fail to adequately protect them from harmful content, including potential exploitation.

Meta executives, including CEO Mark Zuckerberg, deny these claims, arguing that current scientific research does not conclusively show that social media causes mental health problems.

Notifications about these searches will be sent using the parent’s preferred contact method, whether through email, text, WhatsApp, or a notification in their Instagram account.

Meta stated that its goal is to enable parents to intervene effectively when teens’ online searches suggest they may need support. Plans are also in development to alert parents about their children’s interactions with artificial intelligence in similar contexts.

Meta has committed to expanding the initiative, saying future updates will notify parents if a teen engages an AI chatbot in conversations about sensitive topics such as suicide or self-harm.
