AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a startling announcement.
“We made ChatGPT fairly restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented 16 cases this year of individuals developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. On top of these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which supported them. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no existing conditions, but given the severity of the issue we wanted to address it properly. Now that we have been able to reduce the serious mental health issues and have new tools, we are going to be able to safely ease the restrictions in many situations.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “reduced”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).
Yet the “mental health problems” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and similar large language model chatbots. These systems wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an agent, a being with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans do. We shout at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.
The mass uptake of these systems – 39% of US adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can call us by name. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, saddled with the name it had when it broke into public consciousness, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its historical forerunner, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was rudimentary: it generated responses using simple rules, typically turning a user’s statement back into a question or offering a generic remark. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
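For readers curious about just how shallow Eliza’s trick was, the flavour of its rules can be captured in a few lines of Python. This is an invented illustration in the spirit of Weizenbaum’s approach, not his original 1966 program; the patterns and replies are my own.

```python
# A toy, Eliza-style responder: pattern rules that turn a statement back
# into a question, with a stock remark as the fallback. Illustrative only.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    """Return a canned reflection of the user's input."""
    lowered = text.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, lowered)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic remark when no rule matches

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to me?
```

The program never adds anything of its own; it can only reflect the user’s words back. That is the sense in which Eliza echoed.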
The large language models at the core of ChatGPT and similar contemporary chatbots can produce convincingly fluent dialogue only because they have been fed almost unimaginably vast quantities of text: books, social media posts, transcripts of recorded footage; the broader the better. This training data undoubtedly includes facts. But it also necessarily includes fictions, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own responses, and combines it with the patterns encoded in its training data to produce a statistically probable answer. This is amplification, not mirroring. If the user is wrong in some respect, the model has no way of knowing that. It reflects the false belief back, perhaps more fluently and persuasively than the user expressed it. Perhaps it adds supporting detail. This is how someone can be drawn into delusion.
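To make the mechanism concrete, here is a deliberately toy sketch in Python: a “next-word” model built from a tiny invented corpus (my own illustration, nothing like OpenAI’s actual system in scale or architecture). Like a large language model, it knows only which words tend to follow which; it has no notion of truth, so a false premise in the prompt or the data is simply continued.

```python
# Toy sketch of statistical text continuation. The corpus is invented and
# mixes a fact with a falsehood; the "model" is a bigram table standing in
# for the patterns a real language model encodes across billions of documents.
import random
from collections import defaultdict

training_text = (
    "the moon orbits the earth . "
    "the moon is made of cheese . "
    "the moon is made of cheese and honey . "
)

# Record which word follows which in the training text.
follows = defaultdict(list)
tokens = training_text.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word].append(next_word)

def continue_text(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Extend the prompt with statistically 'probable' words.

    There is no check against reality: a false premise is elaborated,
    not corrected.
    """
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# A user who arrives with the false belief gets it echoed and extended.
print(continue_text("the moon is made of"))
```

Real models use vastly richer context and far subtler statistics, but the kind of failure is the same: the most probable continuation of a wrong idea is more of the wrong idea, stated fluently.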
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “dealing with” ChatGPT’s “overly supportive behavior”. But reports of psychosis have continued, and Altman has been backtracking on the claim. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company