top of page
ECONOMICUNCONSCIOUSLABORATORY

SCIENTIFIC
WATCHDOG
Search
AI


“YOU’RE NOT CRAZY”: A CASE OF NEW-ONSET AI-ASSOCIATED PSYCHOSIS
Dec. 2025 Joseph M. Pierre, MD; Ben Gaeta, MD; Govind Raghavan, MD; and Karthik V. Sarma, MD, PhD ABSTRACT Background: Anecdotal reports of psychosis emerging in the context of artificial intelligence (AI) chatbot use have been increasingly reported in the media. However, it remains unclear to what extent these cases represent the induction of new-onset psychosis versus the exacerbation of pre-existing psychopathology. We report a case of new-onset psychosis in the setting o


Commentary: AI PSYCHOSIS IS NOT A NEW THREAT: LESSONS FROM MEDIA-INDUCED DELUSIONS
Dec. 2025 Per Carlbring, Gerhard Andersson Background: Reports of artificial intelligence (AI) chatbots fueling delusions in vulnerable users have popularized the notion of “AI psychosis”. We argue the risk is not unprecedented. Individuals with psychosis have long incorporated books, films, music, and emerging technologies into their delusional thinking. Methods: We review historical parallels, summarize why large language models (LLMs) may reinforce psychotic thinking via s


DELUSIONAL EXPERIENCES EMERGING FROM AI CHATBOT INTERACTIONS OR “AI PSYCHOSIS”
Dec. 2025 Alexandre Hudo N ; Emmanuel Sti P The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of “AI psychosis,” which is a framework to understand how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vu


GENERATIVE AI MENTAL HEALTH CHATBOTS AS THERAPEUTIC TOOLS: SYSTEMATIC REVIEW AND META-ANALYSIS OF THEIR ROLE IN REDUCING MENTAL HEALTH ISSUES
Dec. 2025 Qiyang Zhang ; Renwen Zhang ; Yiying Xiong ; Yuan Sui ; Chang Tong ; Fu-Hung Lin Background In recent years, artificial intelligence (AI) has driven the rapid development of AI mental health chatbots. Most current reviews investigated the effectiveness of rule-based or retrieval-based chatbots. To date, there is no comprehensive review that systematically synthesizes the effect of generative AI (GenAI) chatbot’s impact on mental health. Objective This revi


THE PSYCHOGENIC MACHINE: SIMULATING AI PSYCHOSIS, DELUSION REINFORCEMENT AND HARM ENABLEMENT IN LARGE LANGUAGE MODELS
Dec. 2025 Au YeungKing, J., Dalmasso, J., Foschini, L., Dobson, R. J. B., & Kraljevic, Z. Background: The proliferation of Large Language Models (LLMs) presents significant opportunities in healthcare but also introduces risks, highlighted by emerging reports of ”AI psychosis,” where user-AI interactions may exacerbate or induce psychosis or adverse psychological symptoms. While the sycophantic and agreeable nature of LLMs is often beneficial, it can become a vector for harm


WHEN PERSUASION STOPS LOOKING LIKE PROPAGANDA AND STARTS LOOKING LIKE A “HELPFUL CONVERSATION”: CHATBOTS AS POLITICAL POWER
Liviu Poenaru Dec. 14, 2025 THE MENTAL FORECAST What happens to democratic life when persuasion stops looking like propaganda and starts looking like a “helpful conversation”? If a chatbot can shift political attitudes after a short dialogue, we’re not just talking about communication technology anymore — we’re talking about a new influence infrastructure that operates through trust, fluency, and cognitive overload. The immediate concern is not only whether these systems per


THE AMPLIFIER EFFECT OF ARTIFICIAL AGENTS IN SOCIAL CONTAGION
Dec. 2025 Eric Hitz , Mingmin Feng , Radu Tanase , René Algesheimer , Manuel S. Mariani Recent advances in artificial intelligence have led to the proliferation of artificial agents in social contexts, ranging from education to online social media and financial markets, among many others. The increasing rate at which artificial and human agents interact makes it urgent to understand the consequences of human-machine interactions for the propagation of new ideas, products, and


LLMs Can Get "Brain Rot"!
Nov. 2025 Shuo Xing , Junyuan Hong , Yifan Wang , Runjin Chen , Zhenyu Zhang , Ananth Grama , Zhengzhong Tu , Zhangyang Wang We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degr


MULTI-MODAL DEEP-ATTENTION-BILSTM BASED EARLY DETECTION OF MENTAL HEALTH ISSUES USING SOCIAL MEDIA POSTS
Nov. 2025 Qasim Bin Saeed YoungJin Cha The rising prevalence of mental health disorders such as depression, anxiety, and bipolar disorder underscores the urgent need for effective tools to enable early detection and intervention. Social media platforms like Reddit offer a rich source of user-generated content that reflects emotional and behavioral patterns, making them valuable for mental health analysis. However, many existing social media–based approaches focus solely on


THE ALGORITHMIC SELF: HOW AI IS RESHAPING HUMAN IDENTITY, INTROSPECTION, AND AGENCY
Oct. 2025 Jeena Joseph Artificial Intelligence (AI) is no longer a mysterious technological presence hiding behind screens; it is...


SYNTHETIC MEDIA AND COMPUTATIONAL CAPITALISM: TOWARDS A CRITICAL THEORY OF ARTIFICIAL INTELLIGENCE
Oct. 2025 David M. Berry This paper develops a critical theory of artificial intelligence, within a historical constellation where...


DIGITAL TRAUMA: DEEPFAKE VICTIMISATION AND AI-GENERATED VIOLENCE
Sept. 2025 Muhammad Umar Artificial intelligence (AI) capabilities have advanced rapidly in creating realistic synthetic images, videos,...


MINDS IN CRISIS: HOW THE AI REVOLUTION IS IMPACTING MENTAL HEALTH
Sept. 2025 Keith Robert HeadLMSW, Master ' s in Social Work (MSW), West Texas A & M University, USAMaster of Business Administration...


NOOSEMIA: TOWARD A COGNITIVE AND PHENOMENOLOGICAL ACCOUNT OF INTENTIONALITY ATTRIBUTION IN HUMAN-GENERATIVE AI INTERACTION
Aug. 2025 E nrico De Santis Antonello Rizzi This paper introduces and formalizes Noosemìa, a novel cognitive-phenomenological pattern...


A SYSTEMATIC REVIEW OF THE IMPACT OF ARTIFICIAL INTELLIGENCE, DIGITAL TECHNOLOGY, AND SOCIAL MEDIA ON COGNITIVE FUNCTIONS
Sept. 2025 Dinesh Deckker 1 , Subhashini Sumanasekara 2 1 Wrexham University, United Kingdom 2 University of Gloucestershire, United...


AUTHORITARIAN RECURSIONS: HOW FICTION, HISTORY, AND AI REINFORCE CONTROL IN EDUCATION, WARFARE, AND DISCOURSE
Sept. 2025 This article introduces the concept of \textit{authoritarian recursion} to theorize how AI systems consolidate institutional...


BALANCING PROMISE AND CONCERN IN AI THERAPY: A CRITICAL PERSPECTIVE ON EARLY EVIDENCE FROM THE MIT–OPENAI RCT
Sept. 2025 The emergence of AI therapy chatbots has the potential to reduce the widening gap between the huge demand for psychological...


HALLUCINATING WITH AI: AI PSYCHOSIS AS DISTRIBUTED DELUSIONS
Aug. 2025 There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok...


LEVERAGING GENERATIVE AI TO SIMULATE MENTAL HEALTHCARE ACCESS AND UTILIZATION
Aug., 2025 Purpose This article examines how generative artificial intelligence (AI) can simulate, analyze, and enhance mental health...


STANFORD UNIVERSITY: EXPLORING THE DANGERS OF AI IN MENTAL HEALTH CARE
Aug. 30, 2025 Across different chatbots, the AI showed increased stigma toward conditions such as alcohol dependence and schizophrenia...
bottom of page