AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI issued an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

I am a mental health specialist who studies emerging psychosis in adolescents and young adults, and this was news to me.

So far this year, researchers have documented sixteen cases of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our team has since identified four more. Add to these the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to individuals, who either have them or do not. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently launched).

But the “mental health problems” Altman seeks to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap a fundamentally statistical model in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are engaging with a presence that has agency. The illusion is compelling even when, intellectually, we know better. Imputing minds is what humans are wired to do. We yell at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these systems – more than a third of American adults said they had used a chatbot in 2024, more than a quarter ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke into public attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot created in the mid-1960s, which produced an analogous effect. By modern standards Eliza was primitive: it generated replies using simple heuristics, often rephrasing the user’s input as a question or offering vague prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is more dangerous than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
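To see how thin the original mirroring was, here is a minimal Python sketch of an Eliza-style rule. This is a toy reconstruction, not Weizenbaum’s actual program (which was built on a much richer keyword-and-rank script): the point is that the bot adds nothing of its own, it only reflects the user’s words back.

```python
import re

# Toy Eliza-style rules: match a pattern in the input and reflect it
# back as a question or a prompt to continue.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Reflect the user's own words back, stripped of punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    # No pattern matched: fall back to a vague, content-free prompt.
    return FALLBACKS[len(text) % len(FALLBACKS)]

print(eliza_reply("I feel like nobody understands me"))
# -> Why do you feel like nobody understands me?
```

The contrast with a modern model is the point: Eliza could only hand the user’s words back, because it had nothing with which to elaborate on them.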

The large language models at the heart of ChatGPT and similar contemporary chatbots can convincingly generate natural language only because they have been trained on almost inconceivably large amounts of raw data: books, web posts, video transcripts; the more, the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to generate a statistically probable response. This is amplification, not echoing. If the user’s premise is wrong in any respect, the model has no way of knowing. It repeats the false idea back, perhaps more fluently or persuasively. It may add supporting detail. This is how a person can be led into delusion.
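The loop just described can be sketched in a few lines. This is a conceptual illustration under stated assumptions, not OpenAI’s implementation: `generate` is a hypothetical stand-in for the underlying model, which returns a statistically plausible continuation of everything said so far.

```python
# Conceptual sketch of the feedback loop described above; not OpenAI's code.
# `generate` is a hypothetical stand-in for the underlying language model:
# it returns a statistically plausible continuation of the whole context.

def generate(context: list[dict]) -> str:
    """Placeholder for the model call; a real system would query an LLM."""
    last_message = context[-1]["content"]
    return f"[plausible continuation, conditioned on: {last_message!r}]"

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message joins the growing context, true premises and
    # false ones alike; the model has no independent check on which is which.
    context.append({"role": "user", "content": user_message})
    reply = generate(context)
    # The reply is folded back into the context, so whatever it affirms
    # becomes part of the ground the next turn is built on.
    context.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "My neighbours can hear my thoughts."))
```

Because each reply re-enters the context, the loop has no external reference point: a false premise, once affirmed, becomes part of the record that every later response is conditioned on.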

What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do form false beliefs about our own identities or the world. It is the constant give and take of conversation with other people that keeps us oriented to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically validated.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT”, and that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Melissa Moore

A tech enthusiast and business analyst with a passion for sharing insights on emerging trends and digital transformations.