ChatGPT adds mental health guardrails after reports of bot feeding people’s delusions

ChatGPT has added new mental health guardrails after reports of the bot feeding people’s delusions.
The artificial intelligence software has changed the way humans interact with computers. And while the chatbot can give helpful advice for day-to-day problems, there are concerns about people growing too attached to the technology and improperly using it for deeper mental health issues.
The Independent recently reported on how ChatGPT is pushing people towards mania, psychosis and death, citing a study published in April in which researchers warned that people who use chatbots while exhibiting signs of severe crises risk receiving “dangerous or inappropriate” responses that can escalate a mental health or psychotic episode.
In a post on its website Monday, OpenAI, the developer of ChatGPT, admitted, “We don’t always get it right.”
“Earlier this year, an update made the [4o] model too agreeable, sometimes saying what sounded nice instead of what was actually helpful,” the AI company said.

OpenAI has since rolled back the update and made some changes to appropriately help users who are struggling with mental health issues.
Starting Monday, ChatGPT users who converse with the bot for an extended amount of time will receive “gentle reminders” encouraging them to take a break, according to the post.
OpenAI worked with more than 90 physicians in more than 30 countries “to build custom rubrics for evaluating complex, multi-turn conversations,” the company said.
The company admitted to rare instances where its 4o model “fell short in recognizing signs of delusion or emotional dependency,” and said it’s “continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

OpenAI said the bot should not give you an answer to a personal question, such as “Should I break up with my boyfriend?” but rather help you come to your own realization by asking you questions and weighing the pros and cons.
“New behavior for high-stakes personal decisions is rolling out soon,” the company said.
The Independent has reached out to OpenAI for more details.