ECONOMIC UNCONSCIOUS LABORATORY

SCIENTIFIC
WATCHDOG
AI


BALANCING PROMISE AND CONCERN IN AI THERAPY: A CRITICAL PERSPECTIVE ON EARLY EVIDENCE FROM THE MIT–OPENAI RCT
Sept. 2025 The emergence of AI therapy chatbots has the potential to reduce the widening gap between the huge demand for psychological...


HALLUCINATING WITH AI: AI PSYCHOSIS AS DISTRIBUTED DELUSIONS
Aug. 2025 There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok...


LEVERAGING GENERATIVE AI TO SIMULATE MENTAL HEALTHCARE ACCESS AND UTILIZATION
Aug. 2025 Purpose: This article examines how generative artificial intelligence (AI) can simulate, analyze, and enhance mental health...


STANFORD UNIVERSITY: EXPLORING THE DANGERS OF AI IN MENTAL HEALTH CARE
Aug. 30, 2025 Across different chatbots, the AI showed increased stigma toward conditions such as alcohol dependence and schizophrenia...


MENTAL HEALTH IN THE “ERA” OF ARTIFICIAL INTELLIGENCE: TECHNOSTRESS AND THE PERCEIVED IMPACT ON ANXIETY AND DEPRESSIVE DISORDERS—AN SEM ANALYSIS
Aug. 26, 2025 Introduction: The aim of the current study is to analyze the relationship between mental health and the accelerated...


TOWARDS PERSONALIZED HUMAN AI INTERACTION - ADAPTING THE BEHAVIOR OF AI AGENTS USING NEURAL SIGNATURES OF SUBJECTIVE INTEREST
Aug. 26, 2025 Reinforcement Learning AI commonly uses reward/penalty signals that are objective and explicit in an environment, e.g....


EXPLORING AUTOMATION BIAS IN HUMAN–AI COLLABORATION: A REVIEW AND IMPLICATIONS FOR EXPLAINABLE AI
Aug. 26, 2025 As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public...


ARTIFICIAL INTELLIGENCE AND HUMAN DECISION MAKING: EXPLORING SIMILARITIES IN COGNITIVE BIAS
Aug. 26, 2025 This research explores the extent to which Artificial Personas (APs) generated by Large Language Models (LLMs), like...


GENERATIVE ARTIFICIAL INTELLIGENCE AND THE RISK OF TECHNODIGITAL COLONIALISM
Aug. 26, 2025 The use of Generative Artificial Intelligence has raised concerns related to plagiarism in scientific contexts. However,...


TALK, TRUST, AND TRADE-OFFS: HOW AND WHY TEENS USE AI COMPANIONS
Aug. 26, 2025 A new type of relationship is emerging in teens' lives—one that feels like connection but is powered by code. Our new...


THE ROLE OF AI IN PEER SUPPORT FOR YOUNG PEOPLE: A STUDY OF PREFERENCES FOR HUMAN- AND AI-GENERATED RESPONSES
Aug. 26, 2025 Generative Artificial Intelligence (AI) is integrated into everyday technology, including news, education, and social...


AI AND NEUROMARKETING – UNDERSTANDING CONSUMER DECISION MAKING WITH ARTIFICIAL INTELLIGENCE – SYSTEMATIC REVIEW
Aug. 26, 2025 Neuromarketing studies human behavior through artificial intelligence (AI) technologies, which allow scientists to use...


GENERATIVE AI FOR CONSUMER BEHAVIOR PREDICTION: TECHNIQUES AND APPLICATIONS
Aug. 24, 2025 Generative AI techniques, such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and...


THE SYNERGY OF NEUROMARKETING AND ARTIFICIAL INTELLIGENCE: A COMPREHENSIVE LITERATURE REVIEW IN THE LAST DECADE
Aug. 24, 2025 This paper conducts a systematic literature analysis on "artificial intelligence, ethical artificial intelligence,...


A RISK TAXONOMY FOR EVALUATING AI-POWERED PSYCHOTHERAPY AGENTS
Aug. 24, 2025 The proliferation of Large Language Models (LLMs) and Intelligent Virtual Agents acting as psychotherapists presents...


TRUSTWORTHY AI PSYCHOTHERAPY: MULTI-AGENT LLM WORKFLOW FOR COUNSELING AND EXPLAINABLE MENTAL DISORDER DIAGNOSIS
Aug. 24, 2025 LLM-based agents have emerged as transformative tools capable of executing complex tasks through iterative planning and...


THE APPLICATION AND ETHICAL IMPLICATION OF GENERATIVE AI IN MENTAL HEALTH: SYSTEMATIC REVIEW
Aug. 24, 2025 Background: Mental health disorders affect an estimated 1 in 8 individuals globally, yet traditional interventions...


EMOAGENT: ASSESSING AND SAFEGUARDING HUMAN-AI INTERACTION FOR MENTAL HEALTH SAFETY
Aug. 23, 2025 The rise of LLM-driven AI characters raises safety concerns, particularly for vulnerable human users with psychological...


DELUSIONS BY DESIGN? HOW EVERYDAY AIS MIGHT BE FUELLING PSYCHOSIS (AND WHAT CAN BE DONE ABOUT IT)
Aug. 23, 2025 Large language models (LLMs) are poised to become a ubiquitous feature of our lives, mediating communication,...


RISKS FROM LANGUAGE MODELS FOR AUTOMATED MENTAL HEALTHCARE: ETHICS AND STRUCTURE FOR IMPLEMENTATION
Aug. 23, 2025 Amidst the growing interest in developing task-autonomous AI for automated mental health care, this paper addresses the...