February 27, 2026

Instagram will begin notifying parents if their teen repeatedly searches for suicide or self-harm-related terms within a short period, the company announced Thursday. The feature, launching next week in Canada, the U.S., U.K. and Australia, expands Instagram’s parental supervision tools as Meta and other tech giants continue to face legal challenges over teen safety.
The alerts apply to parents enrolled in Instagram’s supervision program. While the platform already blocks users from viewing suicide and self-harm content in search, Meta says the new system is designed to inform parents if a teen repeatedly attempts to look up related terms, enabling earlier intervention. Notifications will be sent via email, text message or WhatsApp, depending on the contact details provided, and will also appear in the Instagram app. Each alert will include resources to help guide conversations between parents and teens.
Searches that may trigger an alert include phrases that encourage suicide or self-harm, terms indicating a teen may be at risk, and keywords such as “suicide” or “self-harm.” The company said it set a threshold requiring multiple searches within a short timeframe to reduce unnecessary notifications.
The announcement comes as Meta and other social media companies face multiple lawsuits alleging harm to teen users. In testimony this week in U.S. District Court for the Northern District of California, Instagram head Adam Mosseri was questioned about the timing of certain safety feature rollouts, including a nudity filter for teens’ private messages. In separate proceedings before the Los Angeles County Superior Court, internal Meta research presented in court found that parental supervision tools had limited impact on reducing compulsive social media use among children, and that those experiencing stressful life events were more likely to struggle with regulating their usage.
Instagram said it plans to expand the new alert system to additional regions later this year. The company also intends to introduce similar notifications in the future when a teen attempts to engage the app’s AI in conversations about suicide or self-harm.
