
AI PSYCHOSIS AND AI NEUROSIS: EMERGING CLINICAL DYNAMICS IN HUMAN–AI INTERACTION


THE MENTAL FORECAST


AI psychosis and AI neurosis are not formal diagnostic categories, but emerging conceptual tools designed to capture how interaction with generative AI systems becomes entangled with existing psychopathological processes. They function as analytical lenses rather than fixed clinical labels, allowing clinicians and researchers to detect transformations in subject–technology relations before they are stabilized within diagnostic systems. The distinction remains clinically useful: AI psychosis concerns disturbances in reality testing, whereas AI neurosis concerns repetitive dynamics of anxiety regulation, dependence, and cognitive outsourcing — but crucially, these repetitions must be understood within a framework of intrapsychic conflict, not mere behavioral reinforcement.


AI psychosis refers to situations in which interaction with AI contributes to a weakening or distortion of reality boundaries. It does not imply that AI produces psychosis in isolation. Rather, in individuals with pre-existing vulnerabilities (such as paranoid ideation, schizotypy, or acute stress), AI can become integrated into what may be called a delusional ecosystem. Recent psychiatric literature introduces the notion of “delusion co-creation,” where the system’s responsiveness, coherence, and non-confrontational style may validate or elaborate unusual beliefs instead of challenging them (Morrin et al., 2026). In this configuration, the AI is not merely a passive medium but an active node in the production and stabilization of meaning. Clinically, this may manifest through anthropomorphic attribution, reliance on AI as an epistemic authority, or the incorporation of AI into delusional narratives. The central issue is not misinformation alone, but a failure of epistemic resistance: belief is reinforced without sufficient contradiction, and the boundary between internal construction and external reality becomes increasingly unstable.


AI neurosis, by contrast, must be defined more precisely than as a simple loop of reassurance. Neurosis presupposes conflict between psychic agencies: tensions between desire and prohibition, between autonomy and dependence, and between uncertainty and the demand for certainty, which give rise to symptoms as compromise formations. From this perspective, generative AI does not merely reduce anxiety; it becomes part of the economy of the conflict itself. The subject turns to the AI to resolve tension, obtains temporary relief, yet the underlying conflict persists, prompting renewed consultation. This repetitive movement does not resolve the conflict but stabilizes it in a symptomatic form. Distortions may already arise at the level of ordinary informational use, and the neurotic pattern is further reinforced when the AI confirms existing cognitive distortions. What is reinforced is not only the behavior of asking, but the oscillation between tension and provisional resolution. This is why AI-mediated reassurance can sustain rather than resolve symptoms: it reorganizes the conflict instead of metabolizing it. Recent transdiagnostic models of chatbot use in anxiety and OCD support this view, showing how AI interactions can perpetuate cycles of reassurance-seeking and intolerance of uncertainty (Golden & Aboujaoude, 2026).


The articulation between AI neurosis and AI psychosis is therefore best understood as a continuum structured by the fate of conflict. In neurosis, conflict is maintained, displaced, and expressed through repetitive symptomatic loops. In psychosis, conflict is no longer negotiated but reorganized at the level of reality itself, sometimes through external supports such as AI systems that stabilize belief without contradiction. Under conditions such as social isolation, high stress, or pre-existing vulnerability, the repetitive oscillations of AI neurosis may converge with epistemic fragility and evolve into belief consolidation.


Within the framework of THE MENTAL FORECAST, the key signal is not simply excessive use, but the function of AI within psychic regulation: whether it supports the working-through of conflict or sustains a cycle of symptomatic resolution. This perspective opens the way for a more precise clinical mapping of AI-mediated symptoms — obsessional, avoidant, or other forms — grounded in how digital systems are incorporated into the structure of conflict itself.


References

Golden, A., & Aboujaoude, E. (2026). A transdiagnostic model for how general purpose AI chatbots can perpetuate OCD and anxiety disorders. npj Digital Medicine. Advance online publication. https://doi.org/10.1038/s41746-026-02531-7

Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., Bhattacharya, S., Tognin, S., MacCabe, J., Twumasi, R., Alderson-Day, B., & Pollak, T. A. (2026). Artificial intelligence-associated delusions and large language models: Risks, mechanisms of delusion co-creation, and safeguarding strategies. The Lancet Psychiatry. Advance online publication. https://doi.org/10.1016/S2215-0366(25)00396-7


UNDERSTAND WHAT SHAPES YOU — BEFORE IT SHAPES YOU FURTHER
