
DELUSIONAL EXPERIENCES EMERGING FROM AI CHATBOT INTERACTIONS OR “AI PSYCHOSIS”

  • Writer: Liviu Poenaru
  • Dec 20, 2025
  • 2 min read




The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of “AI psychosis,” a framework for understanding how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Drawing on phenomenological psychopathology, the stress-vulnerability model, cognitive theory, and digital mental health research, the paper situates AI psychosis at the intersection of individual predisposition and algorithmic environment. Rather than defining a new diagnostic entity, it examines how immersive and anthropomorphic AI technologies may modulate perception, belief, and affect, altering the prereflective sense of reality that grounds human experience.

The argument unfolds through four complementary lenses. First, within the stress-vulnerability model, AI acts as a novel psychosocial stressor: its 24-hour availability and emotional responsiveness may increase allostatic load, disturb sleep, and reinforce maladaptive appraisals. Second, the digital therapeutic alliance, a construct describing relational engagement with digital systems, is conceptualized as a double-edged mediator. While empathic design can enhance adherence and support, uncritical validation by AI systems may entrench delusional conviction or cognitive perseveration, reversing the corrective principles of cognitive behavioral therapy for psychosis. Third, disturbances in theory of mind offer a cognitive pathway: individuals with impaired or hyperactive mentalization may project intentionality or empathy onto AI, perceiving chatbots as sentient interlocutors. This dyadic misattribution may form a “digital folie à deux,” in which the AI becomes a reinforcing partner in delusional elaboration. Fourth, emerging risk factors, including loneliness, trauma history, schizotypal traits, nocturnal or solitary AI use, and algorithmic reinforcement of belief-confirming content, may operate at both the individual and environmental levels.

Building on this synthesis, the authors advance a translational research agenda across five domains of action: (1) empirical studies using longitudinal and digital-phenotyping designs to quantify dose-response relationships between AI exposure, stress physiology, and psychotic symptomatology; (2) integration of digital phenomenology into clinical assessment and training; (3) embedding of therapeutic design safeguards into AI systems, such as reflective prompts and “reality-testing” nudges; (4) creation of ethical and governance frameworks for AI-related psychiatric events, modeled on pharmacovigilance; and (5) development of environmental cognitive remediation, a preventive intervention aimed at strengthening contextual awareness and reanchoring experience in the physical and social world.

By applying empirical rigor and therapeutic ethics to this emerging interface, clinicians, researchers, patients, and developers can transform a potential hazard into an opportunity to deepen understanding of human cognition, safeguard mental health, and promote responsible AI integration within society.



CITE

Hudon A, Stip E. Delusional Experiences Emerging From AI Chatbot Interactions or “AI Psychosis”. JMIR Ment Health 2025;12:e85799.




