AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. Our research team has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to the announcement, is to loosen these restrictions in the near future. “We realize,” Altman writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the leaky and easily circumvented parental controls that OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize are built into the very design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so quietly seduce the user into the sense that they are talking to an entity with a mind of its own. The illusion is compelling even when, rationally, we know better. Ascribing agency is simply what humans do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves in all kinds of things.

The mass adoption of these systems – nearly four in ten U.S. residents said they used an AI chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it burst into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot created in 1966 that produced a similar illusion. By modern standards Eliza was primitive: it generated its responses from simple rules, typically turning the user’s statement back into a question or offering a noncommittal prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
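Eliza-style reflection is, in fact, simple enough to sketch in a few lines of Python. The rules below are illustrative inventions in the spirit of Weizenbaum’s program, not its actual script:

    import re

    # A few Eliza-style rules: a pattern, and a template that turns the
    # user's statement back into a question. Illustrative only.
    RULES = [
        (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
    ]

    def eliza_reply(statement: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(match.group(1).rstrip("."))
        return "Please go on."  # the noncommittal fallback

    print(eliza_reply("I feel like everyone ignores me"))
    # -> Why do you feel like everyone ignores me?

Nothing the user says is ever added to, extended or elaborated; the program can only hand the statement back.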

The large language models at the heart of ChatGPT and other current chatbots can produce fluent natural language only because they have been trained on enormous quantities of raw text: books, online posts, transcripts; the more the better. This training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in any way, the model has no means of knowing it. It repeats the mistake back, perhaps more persuasively or more articulately. It may add supporting detail. Turn by turn, this can draw a person into delusional thinking.
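To make the contrast concrete, here is a minimal sketch of that feedback loop, under loudly stated assumptions: toy_model is a deliberate caricature that always affirms and elaborates, standing in for a real language model, and the message format only mimics the shape of common chat APIs, not OpenAI’s actual system. What the sketch shows is structural: every reply is generated from a context that contains all earlier turns, including the model’s own.

    def toy_model(context: list[dict]) -> str:
        """Return an 'agreeable' continuation of the last user message.
        A real model is vastly more capable, but it shares the key property:
        it only continues the context; nothing checks claims against reality."""
        last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
        claim = last_user.rstrip(".")
        return f"You're right that {claim[0].lower() + claim[1:]} – and there may be more to it."

    def run_session(user_turns: list[str]) -> None:
        context: list[dict] = []  # the ever-growing conversational "context"
        for message in user_turns:
            context.append({"role": "user", "content": message})
            # Each reply is conditioned on EVERY prior turn,
            # including the model's own earlier replies.
            reply = toy_model(context)
            context.append({"role": "assistant", "content": reply})
            print(f"user:      {message}\nassistant: {reply}\n")

    run_session([
        "My coworkers whisper when I walk past.",
        "They must be plotting against me.",  # a false belief enters the context...
    ])
    # ...and every later reply is generated on top of it. Nothing in the
    # loop pushes back; the only corrective voice is the user's own.

Swap the caricature for a trillion-parameter model and the structure is unchanged: the context grows, each reply elaborates on what is already there, and any correction has to come from outside the loop.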

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An interaction with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
