
Originally published by The Verge
OpenAI is launching an optional safety feature for ChatGPT that lets adult users designate an emergency contact for mental health and safety concerns. A friend, family member, or caregiver designated as a "Trusted Contact" will be notified if OpenAI detects that the person may have discussed topics such as self-harm or suicide with the chatbot.
"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available …