OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

Image Credits: Silas Stein/picture alliance / Getty Images

1:20 PM PDT · May 7, 2026

On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if ideations of self-harm are expressed within a conversation. The feature allows an adult ChatGPT user to designate another person, such as a friend or family member, as a trusted contact within their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves, or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company's system to suicidal ideation, which then relays the information to a human safety team. The company claims that each time it receives this kind of notification, the incident is reviewed by a human. "We strive to review these safety notifications in under 1 hour," the company says.

If OpenAI's internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert, either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the user in question. It does not include detailed information about what was being discussed, as a means of protecting the user's privacy, the company says.

Image Credits: OpenAI

The Trusted Contact feature follows the safeguards the company introduced last September that gave parents the ability to have some oversight of their teens' accounts, including safety notifications designed to alert the parent if OpenAI's system believes their child is facing a "serious safety risk." For some time now, ChatGPT has also included automated prompts to seek professional health services, should a conversation trend toward the topic of self-harm.

Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI's parental controls are also optional, presenting a similar limitation.

"Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments," the company wrote in the announcement post. "We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress."



Lucas is a senior writer at TechCrunch, where he covers artificial intelligence, consumer tech, and startups. He previously covered AI and cybersecurity at Gizmodo. You can contact Lucas by emailing lucas.ropek@techcrunch.com.
