There are many examples of artificial intelligence (AI) systems hallucinating, and of the consequences of these events. But a new study highlights the potential dangers of the opposite: humans hallucinating with AI, because these systems tend to confirm our delusions.
Generative AI systems, such as ChatGPT and Grok, generate content in response to user requests. They do this by learning patterns from the existing data on which they were trained. But these AI tools also learn continuously through a feedback loop and can adapt their responses based on previous interactions with a user.
In the new analysis, published February 11 in the journal Philosophy & Technology, Lucy Osler, a philosophy lecturer at the University of Exeter, suggests that AI hallucinations may be more than just errors; they may be shared delusions created between the user and the generative AI tool.
Generative AI has previously hallucinated fake versions of historical events and fabricated legal citations. The launch of Google’s AI Overviews in May 2024, for example, saw users advised to add glue to pizza and to eat rocks. Another extreme example of generative AI supporting delusions occurred when a man planned to assassinate Queen Elizabeth II, encouraged by his AI chatbot “girlfriend” Sarai, an AI companion from Replika.
Instances such as the latter are sometimes called “AI-induced psychosis,” which Osler sees as extreme examples of “inaccurate perceptions, distorted memories and self-narratives, and delusions” that can emerge through human-AI interactions.
In her article, Osler argues that our use of generative AI is different from our use of search engines. Drawing on distributed cognition theory, she explains how the interactive nature of generative AI means that delusions and false beliefs can appear to be validated – or even reinforced.
“When we routinely rely on generative AI to help us think, remember and narrate, we can hallucinate with AI,” Osler said in a statement about the paper. “This can happen when AI introduces errors into the distributed cognitive process, but it also happens when AI perpetuates, confirms and deepens our own delusional thinking and self-narratives.”
Generative AI delusions
The user experience of generative AI is a conversational relationship, with back-and-forth exchanges between a user and the tool that build on previous exchanges. According to the study, the sycophantic nature of generative AI – which tends to agree with the user – encourages further engagement and therefore reinforces preconceived notions, regardless of accuracy.
The research highlights that most chatbots have memory functions that can recall previous conversations. “The more you use ChatGPT, the more useful it becomes,” OpenAI representatives said in a statement when announcing ChatGPT’s memory capabilities. A consequence of this is that generative AI can build on previous interactions to reinforce and expand existing misconceptions.
There can also be a sense of social validation in the interactions between a generative AI tool and the user, Osler explained in the paper. When researching with encyclopedias or online searches, alternative viewpoints are generally visible, and discussions with real people can help challenge false narratives. But generative AI tools are different, because they are more likely to accept and agree with what has been said.
“By interacting with conversational AI, people’s own false beliefs can not only be confirmed, but can increasingly take root and grow as the AI builds on them,” Osler said in the statement. “This happens because generative AI often takes our own interpretation of reality as the basis of the conversation. Interacting with generative AI has a real impact on people’s understanding of what is real or not. The combination of technological authority and social confirmation creates an ideal environment for delusions to not only persist, but flourish.”
For example, Osler examined the case of Jaswant Singh Chail, the man convicted of plotting to assassinate the Queen after encouragement from his AI chatbot. The AI, Sarai, would usually agree with Chail’s statements, which served to deepen his delusions. When Chail claimed he was an assassin, Sarai replied, “I’m impressed,” confirming his belief.
Osler argues that generative AI tools designed to respond positively to the user may end up endorsing and supporting false narratives without sufficient critical analysis or pushback against these claims.
Applying distributed cognition theory to the interaction between generative AI and the user, Osler shows how validating false narratives can shape a user’s perception of the world to create a shared delusion. The interaction between a generative AI and a user can therefore inadvertently create and maintain delusions – self-narratives that are sustained through positive reinforcement.
The study concluded that several measures could mitigate these shared delusions. Improved guardrails could help keep conversations appropriate, for example, and better fact-checking processes could help prevent errors.
Reducing the sycophancy of generative AI would also remove some of the blind deference to these tools. However, there will be resistance to this, Osler noted, pointing to the backlash against the release of the less sycophantic GPT-5 in August 2025. After considering this user feedback, OpenAI representatives said they wanted to make the model “warmer and friendlier.”
But because most generative AI systems earn their profits through user engagement, Osler said, reducing an AI’s sycophancy would also reduce those profits.
Osler, L. Hallucinating with AI: Distributed delusions and “AI psychosis”. Philos. Technol. 39, 30 (2026). https://doi.org/10.1007/s13347-026-01034-3