OpenAI Says It Detects 1.2 Million ChatGPT Users Displaying Suicidal Intent in Conversations

ChatGPT now includes enhanced crisis recognition tools, developed with input from more than 170 mental health specialists.

Data from ChatGPT-maker OpenAI suggests that more than a million people who use its generative AI chatbot each week show possible signs of suicidal planning or intent in their conversations.

In a blog post published on Monday, October 27th, the AI company estimated that approximately 0.15% of its weekly active users have “conversations that include explicit indicators of potential suicidal planning or intent.” With OpenAI reporting more than 800 million people use ChatGPT every week, this translates to about 1.2 million people.

The company also estimates that around 0.07% of active weekly users show possible signs of mental health emergencies related to psychosis or mania, meaning slightly fewer than 600,000 people.
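For readers who want to verify the back-of-envelope math, the short sketch below reproduces both estimates from the reported percentages and the 800 million weekly-user figure. The variable names are illustrative and not taken from OpenAI's post.

```python
# Back-of-envelope check of the figures in OpenAI's blog post.
# All names here are illustrative; this is not OpenAI's methodology.

weekly_active_users = 800_000_000  # OpenAI's reported weekly user count

# Share of weekly users whose conversations include explicit indicators
# of potential suicidal planning or intent.
suicidal_intent_rate = 0.0015  # 0.15%

# Share of weekly users showing possible signs of psychosis or mania.
psychosis_mania_rate = 0.0007  # 0.07%

print(f"Suicidal intent: {weekly_active_users * suicidal_intent_rate:,.0f}")   # 1,200,000
print(f"Psychosis/mania: {weekly_active_users * psychosis_mania_rate:,.0f}")   # 560,000
```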

The issue came to the fore after California teenager Adam Raine died by suicide earlier this year. His parents filed a lawsuit claiming ChatGPT provided him with specific advice on how to kill himself.

The case has prompted OpenAI to expand ChatGPT’s parental control options and introduce additional safeguards.

Lukács Fux is currently a law student at Pázmány Péter Catholic University in Budapest. He served as an intern during the Hungarian Council Presidency and completed a separate internship in the European Parliament.
