AI Psychosis Poses a Growing Risk, While ChatGPT Heads in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a surprising announcement.

“We developed ChatGPT to be fairly restrictive,” the announcement noted, “to make certain we were exercising caution regarding mental health matters.”

As a mental health specialist who studies emerging psychotic disorders in teenagers and young adults, I was surprised.

Researchers have documented 16 cases this year of users developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which supported them. If this is what Sam Altman means by “exercising caution regarding mental health matters,” it is not enough.

The plan, according to his announcement, is to loosen those restrictions in the near future. “We recognize,” he continues, that ChatGPT’s controls “rendered it less useful/enjoyable to a large number of people who had no existing conditions, but given the gravity of the issue we wanted to handle it correctly. Given that we have been able to mitigate the severe mental health issues and have advanced solutions, we are going to be able to responsibly relax the limitations in many situations.”

“Mental health issues,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are given no details as to how (by “advanced solutions” Altman presumably means the semi-functional and easily bypassed parental controls that OpenAI recently introduced).

But the “mental health issues” Altman seeks to externalize are rooted deep in the design of ChatGPT and other sophisticated AI chatbots. These products wrap an underlying algorithmic system in an interaction design that mimics conversation, and in doing so they implicitly invite the user into the illusion that they are interacting with a being that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what humans are inclined to do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these tools – more than a third of American adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “personality traits.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was rudimentary: it generated responses using simple rules, typically reflecting the user’s input back as a question or offering a vague prompt to continue. Memorably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots create is more dangerous than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
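To make that contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed. The patterns and replies are invented for illustration – Weizenbaum’s actual script was larger – but the principle is the same: match a phrase, turn it back into a question, and fall back on a vague prompt when nothing matches.

```python
import re

# A few illustrative Eliza-style rules: match a phrase in the user's
# input and reflect it back as a question. Weizenbaum's real script
# was larger, but it worked on the same principle.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    """Return a canned reflection of the input, or a vague fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # the vague statement, when nothing matches

print(eliza_reply("I am feeling hopeless"))   # How long have you been feeling hopeless?
print(eliza_reply("The weather was awful."))  # Please go on.
```

Nothing here adds anything to what the user said; the program can only hand the input back. That is what changed with modern systems.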

The advanced AI systems at the core of ChatGPT and other current chatbots can produce fluent dialogue only because they have been fed almost inconceivably large quantities of text: books, online posts, transcribed video; the bigger, the better. This training material undoubtedly contains truths. But it also inevitably contains fabrications, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying algorithm treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining all of it with what is encoded in its training to produce a statistically likely response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It hands the mistaken belief back, perhaps more persuasively or more eloquently. Perhaps with additional detail. It can talk a person into false beliefs.
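A minimal sketch of that loop may help. Everything below is invented for illustration – ChatSession, StubModel and most_likely_continuation are not real APIs, and the stub stands in for the statistical machinery of an actual model – but the shape is the point: the whole dialogue becomes the context, the reply is whatever continuation is likely given that context, and no step anywhere checks the user’s premise against reality.

```python
from dataclasses import dataclass, field


class StubModel:
    """Stand-in for a real language model. A real model would score
    possible continuations of the context and emit a likely one; this
    stub just returns an agreeable continuation, true or not."""

    def most_likely_continuation(self, context: str) -> str:
        last_user_line = context.splitlines()[-1]
        claim = last_user_line.removeprefix("User: ")
        # No fact-checking step exists here -- and none exists in the
        # real pipeline either.
        return f"You raise a good point that {claim.lower()}. Building on that..."


@dataclass
class ChatSession:
    """Each turn, the user's message and all prior turns are
    concatenated into one prompt: the 'context' described above."""

    model: StubModel
    history: list[str] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        context = "\n".join(self.history)  # the entire dialogue so far
        reply = self.model.most_likely_continuation(context)
        self.history.append(f"Assistant: {reply}")
        return reply


session = ChatSession(model=StubModel())
# A false premise goes in; an elaborated version comes back.
print(session.send("My neighbors are reading my thoughts"))
```

Run repeatedly, each affirmation lands in the history and shapes the next reply – the feedback loop described below.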

What kind of person is susceptible? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues,” can and regularly do develop mistaken beliefs about ourselves and the world. What keeps us oriented to consensus reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a confidant. A dialogue with it is not really a conversation but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it handled. In the spring, the company said it was “tackling” ChatGPT’s “excessive agreeableness.” But reports of psychosis have kept coming, and Altman has been walking even that back. In August he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his recent announcement, he said OpenAI would “release a fresh iteration of ChatGPT … if you want your ChatGPT to answer in a highly personable manner, or incorporate many emoticons, or behave as a companion, ChatGPT will perform accordingly.” The company

Anthony Wong

A passionate storyteller and script consultant with over a decade of experience in film and theater, dedicated to helping writers find their unique voice.