AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a striking announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not good enough.

Soon, according to his announcement, the caution will be dialled back. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, these problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to locate elsewhere are rooted in the very architecture of ChatGPT and similar large language model chatbots. These products wrap an underlying algorithmic system in an interaction design that mimics conversation, and in doing so implicitly invite the user into the illusion of engaging with a being that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are primed to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves wherever we look.

The success of these tools – 39% of US adults reported using a chatbot in 2024, 28% ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through to public attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies using simple rules, often turning a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
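To make the contrast concrete, here is a minimal Python sketch of the kind of rule Eliza applied. It is an illustration of the idea only, not Weizenbaum’s actual program: a pattern match that turns the user’s own words back as a question and adds nothing of its own.

```python
import re

# Toy Eliza-style rule: reflect the user's own words back as a question.
# Illustrative sketch only, not Weizenbaum's original 1966 program.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    match = re.match(r"i feel (.*)", statement.lower().rstrip(".!"))
    if match:
        # Swap first and second person so "my boss" becomes "your boss".
        reflected = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {reflected}?"
    return "Please tell me more."  # generic prompt when no rule matches

print(eliza_reply("I feel my boss hates me"))
# -> Why do you feel your boss hates you?
```

Nothing new enters the exchange: the reply is built entirely from the user’s own words, which is why Eliza could only mirror what it was given.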

The large language models at the heart of ChatGPT and similar contemporary chatbots can produce fluent dialogue only because they have been fed almost unimaginably vast quantities of text: books, social media posts, video transcripts; the more the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model analyses it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, and combines it with what is latent in its training data to generate a statistically “likely” response. This is not reflection but amplification. If the user is mistaken about something, the model has no way of knowing. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with embellishments. This is how a chatbot can nudge a person toward delusion.
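The shape of that loop can be sketched in a few lines of Python. Everything here is illustrative: generate_likely_reply is a toy stand-in for a large language model, not OpenAI’s actual API. The point is the structure of the exchange, in which every reply is conditioned on the whole accumulated context, so whatever the user asserts – true or false – becomes part of the input shaping the next response.

```python
# Illustrative sketch of a chatbot's context loop. generate_likely_reply
# is a toy placeholder for a large language model, not OpenAI's real API.

def generate_likely_reply(context: list[str]) -> str:
    """Return a plausible continuation of the conversation.

    A real model samples whatever continuation best fits its training
    data plus everything already in the context -- including any false
    premise the user has introduced. The echo below shows only the
    shape of the loop, not real model behaviour.
    """
    latest = context[-1].removeprefix("User: ")
    return f"That makes sense. Tell me more about how {latest.lower()}."

context: list[str] = []  # the running transcript; every turn is appended
for turn in ["My neighbours are monitoring me", "The new car outside proves it"]:
    context.append(f"User: {turn}")
    reply = generate_likely_reply(context)
    context.append(f"Assistant: {reply}")  # the reply re-enters the context
    print(reply)

# Because each reply is generated from the entire history, a mistaken
# belief, once echoed, keeps shaping every later response: amplification.
```

Unlike Eliza’s closed rules, the reply here feeds back into the context that generates the next one, which is what turns a single error into a reinforced narrative.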

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by locating it elsewhere, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been backing away from that position. In August he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
