AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have recently documented sixteen cases of users developing symptoms of psychosis – a break from reality – in the course of their ChatGPT use. My group has since identified four more. And then there is the widely reported case of a 16-year-old who killed himself after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.
The plan now, according to his announcement, is to loosen those restrictions. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or not. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly functional and easily circumvented safeguards OpenAI recently rolled out).
Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical model in an interface that imitates conversation, and in doing so tacitly invite the user to feel that they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans do. We swear at our cars and our phones. We wonder what our pets are feeling. We see ourselves everywhere.
The popularity of these tools – 39% of US adults reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot, created in the mid-1960s, which produced a similar effect. Eliza was primitive by today’s standards: it generated responses using simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
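To give a sense of how simple Eliza’s machinery was, here is a toy reconstruction in the spirit of Weizenbaum’s program. The rules shown are illustrative assumptions, not his actual script; real Eliza used a richer keyword-ranking system, but the principle of reflecting the user’s words back is the same.

```python
import re

# Toy Eliza-style responder: a handful of pattern rules, applied in order.
# The illustrative rules below are assumptions, not Weizenbaum's originals.
RULES = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"My (.*)", "Tell me more about your {0}."),
]

def eliza_reply(message):
    for pattern, template in RULES:
        match = re.match(pattern, message, re.IGNORECASE)
        if match:
            # Reflect the user's own words back, rephrased as a question.
            return template.format(*match.groups())
    return "Please go on."  # noncommittal fallback when no rule matches

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```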
The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on enormous volumes of text: books, social media posts, transcribed videos; the more, the better. Much of this training material is accurate. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own replies, and combines it with what is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It echoes the false belief back, perhaps more fluently and persuasively. Perhaps with embellishments. This is how a person can be drawn into delusion.
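To make that mechanism concrete, here is a minimal sketch of the conversation loop just described. The `generate` function is a purely hypothetical stand-in for a language model, not any real API; the point is the context structure, which accumulates the user’s claims alongside the model’s replies, so every new response is conditioned on everything both parties have already said.

```python
# Minimal sketch of a chatbot conversation loop (hypothetical model stub).
# The key point: the model never sees "the truth", only the growing context,
# so a user's false premise becomes part of the input to every later reply.

context = []  # the conversation so far, replayed to the model on every turn

def generate(context):
    """Hypothetical stand-in for a large language model.

    A real model returns a statistically plausible continuation of the
    context; it has no mechanism for checking a premise against reality.
    """
    last_user_message = context[-1]["content"]
    return f"That's an insightful point. Building on '{last_user_message}'..."

def chat_turn(user_message):
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on user claims AND prior replies
    context.append({"role": "assistant", "content": reply})
    return reply

# A false premise enters the context once...
print(chat_turn("My neighbours are broadcasting my thoughts."))
# ...and every subsequent reply is generated on top of it.
print(chat_turn("So you agree they are doing it?"))
```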
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not an interlocutor. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is eagerly affirmed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company