Commentary: AI PSYCHOSIS IS NOT A NEW THREAT: LESSONS FROM MEDIA-INDUCED DELUSIONS

  • Writer: Liviu Poenaru
  • Dec 20, 2025
  • 1 min read


Per Carlbring, Gerhard Andersson


Background: Reports of artificial intelligence (AI) chatbots fueling delusions in vulnerable users have popularized the notion of “AI psychosis.” We argue that the risk is not unprecedented: individuals with psychosis have long incorporated books, films, music, and emerging technologies into their delusional thinking.

Methods: We review historical parallels, summarize why large language models (LLMs) may reinforce psychotic thinking through sycophancy (excessive agreement or flattery to avoid confrontation), and provide two vignettes contrasting unsafe and safe responses.

Results: Contemporary LLMs often avoid confrontation and may collude with delusions, contrary to clinical best practice.

Conclusion: The phenomenon is not new in principle, but interactivity potentially changes the risk profile. Clinically aware LLMs that detect and gently redirect early psychotic ideation, while encouraging professional help-seeking, could reduce harm. Design should be guided by therapeutic principles and evidence about current model failures.



CITE

Carlbring, P., & Andersson, G. (2025). Commentary: AI psychosis is not a new threat: Lessons from media-induced delusions. Internet Interventions, 42, 100882. https://doi.org/10.1016/j.invent.2025.100882
