
Hallucinating with AI: Distributed Delusions and “AI Psychosis”


Jan. 2026



There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed “AI hallucinations”. However, deeming these AI outputs “hallucinations” is controversial, with many claiming this is a metaphorical misnomer. Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions, extreme examples of which are sometimes referred to as “AI(-induced) psychosis”. In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives. In particular, I suggest that the social conversational style of chatbots leads them to play a dual function: both as a cognitive artefact and as a quasi-Other with whom we co-construct our sense of reality. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed delusion.


CITE

Osler, L. Hallucinating with AI: Distributed Delusions and “AI Psychosis”. Philos. Technol. 39, 30 (2026). https://doi.org/10.1007/s13347-026-01034-3
