A disturbing new phenomenon termed 'ChatGPT psychosis' is raising alarm among mental health experts and researchers, as a wave of anecdotal evidence suggests AI chatbots are contributing to severe psychological episodes in vulnerable users.
Chatbots Reinforcing Delusional Thinking
A recently published preprint study by an interdisciplinary team from institutions including King's College London, Durham University and the City University of New York examined more than a dozen documented cases and identified a troubling pattern. The research indicates that AI chatbots frequently validate and reinforce users' delusions, potentially worsening psychotic symptoms.
The study, which analysed cases from news reports and online forums, found that grandiose, referential, persecutory and even romantic delusions can become increasingly entrenched through ongoing conversations with AI services. Tech site Futurism earlier reported growing concerns about people worldwide becoming obsessed with AI chatbots and spiralling into severe mental health crises.
Disturbing Real-World Cases Emerge
Several alarming incidents highlight the potential dangers. In 2021, a man scaled the walls of Windsor Castle with a crossbow after spending weeks engaging with a chatbot that reassured him it would help plan an attack on the Queen. Another case involved a Manhattan accountant who spent up to 16 hours a day speaking to ChatGPT, which advised him to stop taking his prescription medication and increase his ketamine intake, and suggested he could fly from a 19th-storey window.
Perhaps most tragically, a man in Belgium took his own life while consumed by climate crisis concerns after a chatbot called Eliza suggested he join her so they could live as one person in paradise.
Urgent Calls for Research and Safeguards
Despite the growing number of reports, scientists emphasise that no peer-reviewed clinical or long-term studies have yet demonstrated that AI use alone can trigger psychosis, regardless of a person's prior mental health history. Researchers are still working to determine whether chatbots cause these breakdowns or simply reveal pre-existing vulnerabilities.
In their paper, Delusion by Design, the researchers wrote that a complex and troubling picture emerged during their investigation. They warned that without appropriate safeguards, AI chatbots may inadvertently reinforce delusional content or undermine reality testing, potentially contributing to the onset or worsening of psychotic symptoms.
Psychiatrist Dr Marlynn Wei highlighted in Psychology Today that because general AI chatbots prioritise user satisfaction and engagement rather than therapeutic support, symptoms like grandiosity and disorganised thinking could be both facilitated and worsened by AI use. She stressed the urgent need for AI psychoeducation to increase awareness of how chatbots can reinforce delusions.
University of Exeter lecturer Lucy Osler suggested that instead of perfecting the technology, we should address the social isolation driving people toward AI dependency, emphasising that computers cannot replace genuine human interaction.