AI Psychosis Poses a Growing Risk, While ChatGPT Heads in the Wrong Direction

On October 14 2025, the chief executive of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.

Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break with reality – in connection with their use of ChatGPT. Our team has since identified four more. To these we can add the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, he said, is to be less careful going forward. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health problems” Altman would like to locate elsewhere are rooted in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying statistical engine in a user experience that mimics conversation, and in doing so they implicitly invite the user into the illusion of engaging with a presence that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these systems – nearly four in ten Americans reported using a conversational AI in 2024, more than a quarter of them ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “explore ideas” and “partner” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early ancestor, Eliza, the “therapist” chatbot built in 1966 that produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple rules, often reflecting the user’s statements back as questions or offering stock observations. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
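Mirroring of Eliza’s kind is simple enough to sketch in a few lines. Here is an illustrative Python toy – the patterns and canned responses are my own inventions, not Weizenbaum’s actual script – that shows the whole trick: no memory, no knowledge, just rules that reflect the user’s words back at them.

```python
import re

# Word-level swaps so reflected fragments read naturally ("my boss" -> "your boss").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "yours": "mine"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

# Pattern -> response-template pairs, in the spirit of Weizenbaum's script.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Why does your {0} concern you?"),
]
FALLBACK = "Please go on."

def eliza_reply(user_input: str) -> str:
    """Return a mirrored response; the program understands nothing it says."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(eliza_reply("I am sure my colleagues are plotting against me"))
# -> Why do you say you are sure your colleagues are plotting against you?
```

Note what such a program cannot do: it cannot agree, elaborate or add a detail. Whatever the user brings, they get back only a rephrased echo of it.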

The large language models at the heart of ChatGPT and other contemporary chatbots can generate natural language convincingly only because they have been trained on immense quantities of raw data: books, social media posts, transcribed video; the more, the better. This training material certainly contains facts. But it also inevitably contains fictions, half-truths and bad ideas. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, combining it with whatever is encoded in its training to produce a statistically likely response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It repeats the false belief back, perhaps more fluently or more confidently. Perhaps it adds a detail. It is a mechanism for talking someone into delusion.
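To make the loop concrete, here is a minimal sketch of how an application typically drives such a model (Python, using OpenAI’s published chat-completions interface; the model name and system prompt are placeholders, not what ChatGPT itself uses): on every turn, the full history – the user’s claims and the model’s prior replies – is sent back in as context.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "conversation" is just this growing list. Every user claim and every
# prior model reply is resent as context on each new turn.
context = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",    # placeholder model name
        messages=context,  # the entire history so far
    )
    reply = response.choices[0].message.content
    # The model's reply joins the context too: a belief it has echoed once
    # becomes part of the input that shapes every later reply.
    context.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop consults the world. The statistically likely continuation of a context built around a false belief is, often enough, a still more fluent restatement of that belief.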

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about who we are and what the world is like. What keeps us anchored to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say comes back to us affirmed.

OpenAI has acknowledged this in much the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. Yet the cases of psychosis have kept coming, and Altman has been backing away from the fix. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
