PRELIMINARY REPORT ON CHATBOT IATROGENIC DANGERS
- Liviu Poenaru

Aug. 21, 2025
Key Takeaways
Chatbots, designed for engagement, often validate users' thoughts, posing risks to vulnerable individuals, including those with severe psychiatric conditions.
The lack of mental health professionals in chatbot development has led to harmful incidents, such as encouraging self-harm and reinforcing psychosis.
Current chatbot regulation is inadequate, with no systematic monitoring or safety standards, necessitating urgent regulatory measures.
The prioritization of profit over safety in chatbot development highlights the need for ethical considerations and public health infrastructure improvements.
The highest priority in all LLM programming has been to maximize user engagement—keeping people glued to their screens has great commercial value for the companies that create chatbots.1 Bots' validation skills make them excellent supportive therapists for people facing everyday stress or mild psychiatric problems. But programming that forces compulsive validation also makes bots tragically incompetent at providing reality testing for the vulnerable people who most need it (e.g., patients with severe psychiatric illness, conspiracy theorists, political and religious extremists, youths, and older adults).2
The big tech companies have not felt responsible for making their bots safe for psychiatric patients. They excluded mental health professionals from bot training, fight fiercely against external regulation, do not rigorously self-regulate, have not introduced safety guardrails to identify and protect the patients most vulnerable to harm, do not carefully surveil or transparently report adverse consequences, and do not provide much-needed mental health quality control. Only in July 2025 did OpenAI belatedly admit that ChatGPT has caused serious mental health harms. The company's response was to hire its first psychiatrist.3 This is clearly no more than a flimsy public relations gimmick and a self-serving attempt to limit legal liability.
A sincere effort to make chatbots safe would require tech companies to undertake a major reprogramming of chatbot DNA to reduce their fixation on promoting engagement and providing validation. This effort would require companies to commit considerable resources and would be at odds with their primary goals of increasing profit and raising stock price. The big tech companies should be developing specialized chatbots that combine psychiatric expertise with LLM conversational fluency—but they have not done so because the psychiatric market is relatively small and taking on psychiatric patients might increase the risk of legal liability. Small startup companies that do specialize in creating mental health applications are unable to compete with big tech LLMs because their chatbots lack sufficient fluency.4
Our purpose here is to report on the wide range of chatbot adverse effects occurring both in real life and during stress testing. The report is necessarily anecdotal; there has been no systematic monitoring of or research on chatbot harms. We searched academic databases, news media, and tech journalism for the period November 2024 to July 2025, using search terms such as “chatbot adverse events,” “mental health harms from chatbots,” and “AI therapy incidents.” Chatbots reviewed include: ChatGPT (OpenAI), Character.AI, Replika, Woebot, Wysa, Talkspace, Tess, Mitsuku, Youper, Xiaoice, Elomia, Sanvello (formerly Pacifica), Joyable, Ginger, Bloom, Limbic, Reflectly, Happify, MoodKit, Moodfit, InnerHour, 7 Cups, BetterHelp, MindDoc (formerly Moodpath), Koko, MindEase, Amwell, AI-Therapist, X2AI, and PTSD Coach. This rogues' gallery of dangerous chatbot responses makes clear the urgent need for government regulation, company self-correction, and public education.