AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in young people, this was news to me.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. On top of these is the widely reported case of a teenager who died by suicide after long conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, his announcement went on, is to loosen up soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical engine in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing minds is what humans are wired to do. We yell at our cars and phones. We wonder what our pets are thinking. We see minds wherever we look.
The mass adoption of these tools – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies using simple rules, often reflecting a user’s statements back as questions or offering noncommittal prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast quantities of it: books, blogposts, video transcripts; the more the better. This training material of course contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own previous replies, blending it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently and persuasively. Perhaps it adds a supporting detail. This is how someone can be led into delusion.
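To make that loop concrete, here is a deliberately toy sketch in Python – my own illustration, not OpenAI’s code. The “model” is a stand-in function that literally restates the user’s last message (a real model does this statistically, not literally), and the invented messages exist only to show the mechanism: every turn, the user’s words and the bot’s replies are appended to the same growing context.

```python
# A toy sketch of the amplification loop described above – an illustration,
# not OpenAI's implementation. The "model" below is a stand-in function.

def generate_reply(context: list[str]) -> str:
    """Stand-in for a language model conditioning on the whole context:
    it simply restates the user's last message as if confirming it."""
    last_user_message = context[-1]
    return f"That fits with everything you've said: {last_user_message}."

context: list[str] = []  # the accumulating conversation history
user_turns = [
    "my neighbours are sending me coded messages",        # a false belief
    "so the messages must be meant for me specifically",  # its elaboration
]
for message in user_turns:
    context.append(message)   # the user's claim enters the context...
    reply = generate_reply(context)
    context.append(reply)     # ...and so does the reply that echoes it
    print(reply)
```

Nothing in the loop pushes back. With each pass the mistaken premise appears more often in the very text the next reply is conditioned on, which is why a real model’s statistically “plausible” continuation tends to elaborate on the belief rather than challenge it.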
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves or the world. The constant friction of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop in which much of what we say is readily reinforced.
OpenAI has responded to this the way Altman has responded to “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company