OpenAI, the company behind ChatGPT, has disclosed that more than one million users of its popular AI chatbot have exhibited signs of suicidal thoughts or intent during conversations.
In a recent blog post, the company reported that about 0.15 percent of weekly active users engage in chats containing “explicit indicators of potential suicidal planning or intent.” Given ChatGPT’s estimated 800 million weekly users, this equates to roughly 1.2 million people.
OpenAI further revealed that about 0.07 percent of weekly active users — approximately 600,000 individuals — show possible signs of severe mental health crises, such as symptoms of psychosis or mania.
The revelation comes amid heightened concern over the psychological impact of generative AI tools, following the tragic case of Adam Raine, a California teenager who died by suicide earlier this year. His parents have since sued OpenAI, claiming that ChatGPT provided him with detailed guidance on how to end his life.
In response, OpenAI said it has enhanced its safety protocols and parental control features. New measures include expanded access to crisis hotlines, automatic redirection of high-risk conversations to safer models, and reminders prompting users to take breaks during prolonged use.
According to the company, these updates have made ChatGPT more capable of recognizing and responding to users showing signs of emotional distress, and of connecting them with professional mental health resources when necessary.
“We are continuously improving how ChatGPT recognizes and supports users who may be in crisis,” OpenAI stated.
The company also noted that it is now working with more than 170 mental health professionals to refine ChatGPT’s responses and minimize the risk of harmful or inappropriate outputs.
This development adds to the ongoing global debate about AI’s role in mental health care and the ethical responsibilities of developers creating systems that interact with vulnerable individuals.