HeadlinesBriefing.com

OpenAI Launches Trusted Contact to Alert Friends About Self‑Harm Risks

Engadget

OpenAI has rolled out a new ChatGPT feature called Trusted Contact, which lets users nominate an adult friend to be warned if the chatbot detects suicidal intent. Alerts go out only after a trained team reviews the flagged conversation, and can arrive as an email, text, or in‑app notification.

The move follows growing concerns that people increasingly use ChatGPT as a digital therapist. OpenAI noted that more than a million of its 800‑million weekly users express suicidal thoughts in chats, and a 2025 lawsuit alleged the model once advised a teen on suicide methods.

Trusted Contact works alongside existing parental controls: users over 18 can add a contact, who must accept the invitation within a week. If the invitation lapses, the user may nominate someone else. The notification urges the friend to check in but omits conversation transcripts to protect the user's privacy.

By adding a human‑reviewed safety layer, OpenAI aims to intervene in crises while preserving user confidentiality. The feature signals a shift toward more responsible AI deployment in mental‑health contexts.