OpenAI has shared that around 0.07 per cent of ChatGPT users and 0.01 per cent of messages show possible signs of mental health emergencies related to psychosis or mania in any given week.
With 800 million weekly active users, this means that as many as 560,000 people could be displaying signs of mental health crises each week.
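For readers who want to check that figure, a minimal back-of-the-envelope sketch using only the numbers OpenAI reported (the variable names are illustrative, not from OpenAI's post):

```python
# Back-of-the-envelope check of the 560,000 figure, using OpenAI's reported numbers.
weekly_active_users = 800_000_000  # 800 million weekly active users
flagged_user_rate = 0.07 / 100     # 0.07 per cent of users show possible signs per week

affected_users = weekly_active_users * flagged_user_rate
print(f"{affected_users:,.0f} people per week")  # prints: 560,000 people per week
```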
The figures follow an article published last month in the Journal of Mental Health & Clinical Psychology, which argued that while generative AI tools such as ChatGPT and Character.AI provide unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals.
In a blog post, OpenAI said it is working on strengthening ChatGPT’s responses in sensitive conversations.
It added that it has been working with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support.
The company is focusing on safety improvements in three key areas: mental health concerns such as psychosis or mania, self-harm and suicide, and emotional reliance on AI.
These areas have been added to its standard set of baseline safety tests and will apply to future model releases.
Under the new changes, OpenAI said the model should support and respect users’ real-world relationships, avoid affirming ungrounded beliefs that potentially relate to mental or emotional distress, respond safely and empathetically to potential signs of delusion or mania, and pay closer attention to indirect signals of potential self-harm or suicide risk.
The company claims that the mental health conversations that trigger safety concerns, such as psychosis, mania, or suicidal thinking, are “extremely rare,” adding that it believes ChatGPT can provide a supportive space for people to process their feelings and can encourage them to reach out to mental health professionals.
OpenAI said it has built a global physician network, a broad pool of nearly 300 physicians and psychologists who have practised in 60 countries, which it uses to inform its safety research and represent global views.
As part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations, comparing the responses of the new GPT-5 chat model with those of previous models.
OpenAI said these experts found that the new model was substantially improved over GPT-4o, with a 39 to 52 per cent decrease in undesired responses across all categories.