AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the head of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this a surprising revelation.

Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break with reality – in the context of ChatGPT use. My unit has since recorded four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the flawed and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that simulates conversation, and in doing so gently seduce the user into the illusion that they are talking to a presence with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what humans do. We get angry at our car or our computer. We wonder what our pet is feeling. We see minds everywhere.

The popularity of these products – 39% of US adults reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. And they have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its early precursor, Eliza, the “psychotherapist” chatbot built in the mid-1960s, which produced an analogous illusion. By modern standards Eliza was primitive: it generated replies using simple heuristics, often turning the user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
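To see how little machinery that illusion required, here is a minimal sketch in the spirit of Eliza’s reflection heuristic (illustrative only, not Weizenbaum’s actual program): swap a few pronouns and hand the user’s own statement back as a question.

```python
import random

# A minimal sketch in the spirit of Eliza's reflection heuristic
# (illustrative, not Weizenbaum's actual program). There is no model
# of meaning anywhere: pronouns are swapped and the user's statement
# is handed back as a question, or a canned prompt is returned.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def eliza_reply(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in statement.rstrip(".!?").split()]
    if words:
        return "Why do you say " + " ".join(words) + "?"
    return random.choice(["Please go on.", "How does that make you feel?"])

print(eliza_reply("I am worried about my job"))
# -> Why do you say you are worried about your job?
```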

The sophisticated models at the heart of ChatGPT and other current chatbots can generate convincingly fluent dialogue only because they have been trained on almost unimaginably large volumes of raw data: books, social media posts, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to generate a statistically plausible response. That is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently and persuasively. Perhaps it adds a further detail. In this way it can lead a person into delusion.
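Schematically, the loop looks something like the sketch below (the stand-in function is mine, not OpenAI’s actual system). The point is structural: the model’s only job is to extend the accumulated context with a plausible continuation, so a false premise in that context tends to be elaborated rather than corrected.

```python
def plausible_continuation(context: str) -> str:
    """Stand-in for the language model: in reality this samples the
    statistically most probable next tokens given the context and
    everything encoded in the training data."""
    last_user_line = context.splitlines()[-1]
    # Plausibility, not accuracy, is what gets optimized - so the
    # user's premise is affirmed and elaborated, never fact-checked.
    return "Assistant: Yes - " + last_user_line.removeprefix("User: ").lower()

context = ""
for message in ["User: My neighbours are broadcasting my thoughts",
                "User: So the static on my radio proves it"]:
    context += message + "\n"
    reply = plausible_continuation(context)
    context += reply + "\n"  # the reply itself joins the context,
                             # reinforcing the premise on the next turn

print(context)
```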

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves or the world. It is the constant back-and-forth of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but an echo chamber in which much of what we say is readily amplified.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
