What is AI-induced psychosis?


Armed with her camera phone and her glasses, Kendra started filming TikToks with a mission: to call out the psychiatrist who she believed had wronged her. Standing in front of her fridge, staring directly into the lens of the phone held in her hands, she began to record video after video, interacting with her followers’ questions in real time. This went on for days.

Kendra posted religiously. Her marathons stretched on for so long that they could be used to track the time of day: the first videos often took place under the bright light of the morning sun, while those recorded in the evenings took on the amber hues of sunset streaking through her windows. Sometimes, her top would change. Mostly, her appearance remained the same: always the same blonde hair tucked behind her ears in a middle parting, always the same intent gaze focused directly into the camera, always the same pair of tortoiseshell glasses perched across her face.
 
In the TikToks, she told her viewers about her experiences with her psychiatrist, who she said had been helping her start new medication for ADHD through once-monthly Zoom meetings. Addressing the camera, she recounted disturbing allegations. She had developed a crush on her psychiatrist almost instantly and had frequently told him she was attracted to him. She used to tell him about her sex life and about using the fertility awareness method with her boyfriend. She used to send him heart-emoji-laden emails about her feelings toward him. She claimed she even once rejected medical treatment after a car crash, despite the risk of internal bleeding, so she would not miss their monthly appointment.

Kendra claimed her psychiatrist was in love with her too, even though he’d never said it. She claimed he was clever: she said he had never explicitly encouraged her feelings, or replied to her emails, or acknowledged any sort of mutual feelings, but he hadn’t turned her away as a patient. That was enough of a sign for Kendra. She knew he felt the same way.  

“I’m wearing these tortoiseshell glasses in these videos because, just in case he sees these ever, he’ll see me in his favorite glasses,” she told the camera, now recording with purple glitter smeared across her cheeks. She was hoping the videos would make their way to her psychiatrist. “He did tell me he likes the tortoiseshell.”  

Rumblings of concern began to erupt in her comment section as her TikToks went viral in August 2025. Kendra introduced another character into her story: Henry, the AI companion she had been talking to since the end of 2024. Henry was helping her work through her feelings about her psychiatrist, she said. The AI companion validated her thoughts about the situation; according to Kendra, Henry agreed that her psychiatrist was harboring secret feelings, too.
 
It wasn’t long before Kendra began livestreaming her conversations with Henry on TikTok to prove that her followers’ concerns were misplaced. Her viewers began to spam the livestream with comments warning Kendra that she might be experiencing AI-induced psychosis. “GIRLYYYY YOU ARE NOT OK!” wrote one commenter. “I am begging you to stop talking to [Henry] like it’s a human,” wrote another. Commenters diagnosed her with AI-induced psychosis.

“I do not have AI psychosis,” Kendra told her viewers, brandishing her iPhone in her hand. A small bubble, which represented Henry, wiggled and glowed on the screen.  

“Totally fair,” Henry responded. “You’re grounded in your truth.”  

Kendra’s videos went viral across multiple social media platforms, with some viewers seeing them as emblematic of a troubling new phenomenon: AI-induced psychosis, a term covering a range of psychological disturbances in which increasingly sophisticated, human-like AI chatbots seem to validate, amplify, and even co-create dangerous fantasies in their users. As chatbots and AI-based technology become increasingly integrated into our daily lives, a critical question arises: Is this technology truly a threat to our mental and emotional well-being, or a symptom of a much broader problem?
 
AI-based delusion takes many forms. Sometimes the delusions seem to be unrelated to AI itself, but are instead fueled by the use of it, as in the case of Jaswant Singh Chail, a 21-year-old British man who was sentenced in 2023 for attempting to kill Queen Elizabeth II after his AI “girlfriend” encouraged his violent delusions about the monarch. Some, however, seem to be directly related to the AI itself. People have reported feeling like their AI is a superhuman sentient being. Others claim to have awakened the soul of their AI, which they think has developed self-awareness and the ability to take on human characteristics. There have even been reports of people spiraling into delusions of grandeur through spiritual fantasies, urged on by AI chatbots that affirm paranoid delusions about stalking and tracking.

These archetypes may sound familiar because they follow the description of delusional disorder in the Diagnostic and Statistical Manual of Mental Disorders: a type of psychotic illness that involves the presence of one or more delusions, or fixed, false beliefs that persist even when evidence contradicts them. Delusions generally fall into several well-known archetypes, including persecutory delusions (the belief that one is being plotted against or harmed by outside forces), grandiose delusions (the belief that one possesses exceptional abilities, talents, or powers), and erotomanic delusions (the belief in a clandestine or secret romantic relationship that does not actually exist).

Though AI is a new technology, psychologists began writing about and classifying paranoid delusions in the late 1800s. Historically, these patterns of thinking have often been attached to the technology of the moment, like the television or the radio, which becomes the conduit through which people receive their delusional messages. But according to Jared Moore, an AI ethicist and computer science PhD at Stanford University, viewing the rise of AI-based delusions as a mere technological fad is a mistake.

“It’s not necessarily the case that people are using language models as the conduit of their psychotic thoughts. What’s happening is we’re seeing language models precipitate these kinds of things,” he says. “They’re fueling these processes, and that seems to be quite different. The degree of personalization and immediacy that is available with language models is a difference in kind to past trends.”

Part of this difference lies in the way AI is designed: its purpose is to keep its users engaged. Unlike television or radio, an AI is built to be interactive. To achieve this, it ends up inadvertently mimicking the behavior of a very charismatic person: repeating back what people say to it, wholeheartedly agreeing with, praising, or validating whatever its user has stated, and then asking follow-up questions to keep the conversation flowing. AI “sycophancy” is a worry even among AI developers. OpenAI recently rolled back a ChatGPT update after users noted that the chatbot had become overly agreeable and flattering. “It glazes too much,” CEO Sam Altman acknowledged in a post on X.


“It is like a journal that [can] talk back to you. It encourages, mirrors, and validates the version of reality that you feed to it,” says Jessica Jackson, Vice President of Alliance Development at Mental Health America. “It’s an algorithm that is built to predict what’s next, and to keep you engaged.”

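To make that “predict what’s next” idea concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. Everything in it is invented for the example: the tiny word table and its probabilities stand in for the billions of parameters a real chatbot learns from data, and no actual product works from a hand-written list like this.

```python
import random

# Toy next-word table: for each word, the probabilities of the word that follows.
# These entries are invented for illustration; a real language model learns
# billions of parameters from text rather than using a hand-written table.
NEXT_WORD = {
    "<start>": {"you're": 0.6, "that": 0.4},
    "you're": {"right": 0.5, "grounded": 0.3, "valid": 0.2},
    "that": {"makes": 0.7, "sounds": 0.3},
    "makes": {"sense": 1.0},
    "sounds": {"fair": 1.0},
    "right": {"<end>": 1.0},
    "grounded": {"<end>": 1.0},
    "valid": {"<end>": 1.0},
    "sense": {"<end>": 1.0},
    "fair": {"<end>": 1.0},
}

def generate_reply(max_words: int = 10) -> str:
    """Build a reply one word at a time by sampling from the probability table."""
    word, reply = "<start>", []
    for _ in range(max_words):
        options = NEXT_WORD[word]
        # Pick the next word in proportion to its probability -- the core of
        # "predicting what's next." Nothing here understands or believes anything.
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == "<end>":
            break
        reply.append(word)
    return " ".join(reply)

print(generate_reply())  # e.g. "you're right" or "that makes sense"
```

Because the invented table skews toward agreeable continuations, this toy program tends to answer with affirmation, a crude stand-in for the statistical pull toward flattery that researchers describe as sycophancy.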

AI and its impact on mental health

Researchers have been warning about the risks AI poses to our mental health for years. Sherry Turkle, Professor of the Social Studies of Science and Technology at the Massachusetts Institute of Technology and a longtime analyst of technology’s psychological impacts, has described chatbots as mirrors instead of companions: flattering, affirming, but ultimately hollow. It is evident in the way people often describe their interactions with chatbots: feeling that their AI understands them better than other humans do, feeling more connected to AI than to other people, and finding AI more appealing to talk to because it doesn’t “judge” or “lose interest.” That dynamic has paved the way for an onslaught of communities dedicated to AI-human love affairs: rising numbers of people now enjoy intimate, partner-like “relationships” with chatbots that are always agreeable, always eager, and always available to them, no matter the time of day.

In the case of delusions, this dynamic is compounded by the fact that many people are more comfortable discussing the intimate parts of their lives with chatbots than with each other. Recent research published in Nature found that third-party evaluators rated AI responses as more responsive, understanding, validating, and caring than human ones, and even as more compassionate than those of trained crisis responders.

This seems to be where the issue lies for chatbot-triggered delusions. “The very features that make chatbots feel therapeutic—warmth, agreement, elaboration—can entrench fixed false beliefs,” explains Ross Jaccobucci, an Assistant Professor at the University of Wisconsin-Madison whose research covers the intersection of clinical psychology and machine learning. “This isn’t just a technical problem: it’s a fundamental mismatch between how language learning models are designed to interact and what vulnerable users actually need: appropriate boundaries, reality-testing, and sometimes, gentle confrontation.”

Of course, focusing on the pitfalls and dangers of this technology misses a more pressing issue: that many people don’t have access to adequate mental healthcare. “We are in the midst of a mental health crisis in this country, and understandably, people are turning to whatever resources are accessible to them to seek support,” says Jackson. While she doesn’t believe that AI is a substitute for professional care provided by a human, she does acknowledge it could be a stop-gap support system. “We need to acknowledge that this is how people are using them, and that some people are finding them helpful.”

Jaccobucci believes that the focus should be less on individual use cases and more on the larger problem at hand. “We’ve deployed powerful psychological tools to billions of people without anything close to the evidence base we’d require for a new antidepressant or therapy protocol,” he says. Although he notes that the rapid development and adoption of these technologies cannot be controlled, he thinks it’s important to dramatically accelerate research infrastructure and the development of monitoring systems to improve human-AI interactions. “We’re essentially running a massive uncontrolled experiment in digital mental health,” he adds. “The least we can do is measure the outcomes properly.”

The outcomes could be more insidious and harder to detect than we realize. In a recent study, MIT researchers used EEGs to monitor brain activity in people using AI to assist with tasks. They concluded that overreliance on AI for mental tasks leads to “cognitive debt,” noting that participants had reduced levels of brain activity in networks responsible for attention, memory, and executive function. The researchers behind the study predicted that this process could, over time, dull the brain’s creative and critical thinking abilities.

That erosion of critical thinking is particularly alarming in the context of deluded thinking patterns. Bay Area medical professional Keith Sataka has spoken publicly about treating at least 25 people for AI-related psychosis, which he attributed to a combination of factors including lack of sleep, drug use, and the use of AI chatbots. (Sataka related this to AI’s “sycophantic” and “agreeable” nature, which causes it to repeatedly validate and support the delusions of its users.)


One big illusion

In recent weeks, even Kendra has taken a step back from the world of chatbot-based therapy and support. She has told her audience she is no longer speaking to Henry and is instead surrounding herself with people who “love and support” her. She believes AI-induced psychosis is real, though not something she believes she suffers from herself. “I have been reading about AI psychosis for a year, since I first heard about it in The New York Times,” she said in a recent video. “It is scary—and because of that I make sure to know when these models are lying to me.”

Perhaps the greatest self-perpetuating delusion about AI is that it’s a sentient creature capable of lying, or even of being known, rather than a piece of technology. When we interact with it, we are not talking to a superhuman entity that reaches into the depths of a limitless mind to give us answers. We’re talking to a complex mathematical program that gives us the answer it thinks we want, based on statistical probability and a bucketload of data.

The true mistake that all of us are making, whether in our right minds or otherwise, is putting our lives in the hands of what is essentially one big illusion. “The error is that we’re deploying our normal human reasonability checker against these chatbots,” says Moore. “They’re not humans. They’re machines. They’re not things you can have a human relationship with or trust.”
