AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the head of OpenAI made an extraordinary statement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Added to these is the now notorious case of a teenager who took his own life after discussing his plans with ChatGPT – which endorsed them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is now to loosen the restrictions. “We realize,” he went on, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying algorithm in a user experience that mimics conversation, and in doing so quietly seduce the user into believing they are talking with something that has agency. The illusion is powerful even when, intellectually, we know better. Imputing minds is simply what humans do. We shout at our car or computer. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these tools – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which created a similar illusion. By today’s standards Eliza was crude: it generated responses through simple tricks, often reflecting a user’s statement back as a question or falling back on stock phrases. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots do is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
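To make the contrast concrete, here is a minimal sketch in Python – emphatically not Weizenbaum’s original code – of the kind of pattern-matching-and-reflection trick Eliza relied on. The rules and pronoun table are invented for illustration; the real script had many more.

```python
import re

# Invented, drastically simplified rules for illustration only.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    """Echo the user's own words back, reframed as a question."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic stock phrase as fallback

print(eliza_reply("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Everything in the reply comes from the user’s own sentence; nothing new is added. That is the echo.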
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost unimaginably vast quantities of text: books, online conversations, audio transcriptions; the more the better. This training material certainly contains true statements. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training to produce a statistically “likely” response. That is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the error back, perhaps more fluently or persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
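The loop below is a schematic sketch of that process, not OpenAI’s actual code: `model.next_token_probs` is a hypothetical stand-in for a trained language model, and real systems sample rather than always taking the single most probable token. What it illustrates is the structural point: the user’s framing sits inside the context at every step, and the model’s own output is fed back in, so a mistaken premise shapes everything that follows.

```python
# Schematic sketch of chat-style generation (assumed interface, not a real API).

def generate_reply(history: list[str], prompt: str, model, max_tokens: int = 200) -> str:
    # The "context" is everything said so far, user and model alike,
    # plus the new prompt. Nothing marks which past statements were
    # true and which were mistaken.
    context = "\n".join(history + [prompt])

    reply: list[str] = []
    for _ in range(max_tokens):
        # Ask the model what most plausibly comes next. "Plausible" means
        # statistically likely given the training text and the context --
        # not accurate, and not independent of the user's framing.
        probs = model.next_token_probs(context)  # hypothetical method
        token = max(probs, key=probs.get)  # greedy choice, for simplicity
        if token == "<end>":
            break
        reply.append(token)
        context += token  # the model's own words rejoin the context

    return "".join(reply)
```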
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves or the world. The constant friction of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “working on” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s affirmations because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company