OpenAI Introduces New ‘Trusted Contact’ Safeguard for Cases of Possible Self-Harm

Published: 2026-05-07

Summary

OpenAI has launched a Trusted Contact feature for adult ChatGPT users. If automated systems and human reviewers determine that a user is discussing self-harm, a pre-designated trusted contact may be notified. The feature is designed to work alongside the localized helplines already available in ChatGPT and builds on the earlier parental-controls safety notifications for teen accounts. OpenAI disclosed that, in a given week, 0.07% of users show signs of mental-health emergencies such as psychosis or mania, 0.15% express self-harm or suicide risk, and 0.15% show emotional reliance on AI; with roughly 10% of the world's population using ChatGPT weekly, those percentages could translate to millions of people. Notifications include guidance for handling sensitive conversations, and the feature was developed with input from clinicians and safety experts.

Key Data Points

  • Feature: Trusted Contact alerts for adult users
  • Trigger: Automated systems and human reviewers flag self-harm risk
  • Prevalence stats (weekly users):
    • 0.07% mental health emergencies (psychosis/mania)
    • 0.15% self-harm/suicide risk
    • 0.15% emotional reliance on AI
  • Scale: ~10% of world population uses ChatGPT weekly
  • Safeguards: Localized helplines, links to expert guidance, human review
  • Expert input: Clinicians and safety experts consulted during development

Enrichment Snippets

  • TechCrunch: “The Trusted Contact feature follows the safeguards … that gave parents the power to have some oversight of their teens’ accounts.”
  • Gizmodo: “Considering the company claims that roughly 10% of the world’s population uses ChatGPT weekly, that could amount to nearly three million people.”
  • OpenAI: “Trusted Contact is designed to offer another layer of support alongside the localized helplines already available in ChatGPT.”

Relevance

  • Impact: HIGH. A safety and regulatory development at global scale; it shapes product-design norms for AI mental-health interventions and raises user-privacy considerations.