

PSYCHOSIS-RISK IN THE AGE OF DIGITAL OVERLOAD
Liviu Poenaru, Nov. 26, 2025
​
Recent neuroimaging evidence shows that segregation patterns within the salience network act as early neurobiological markers capable of distinguishing symptom trajectories in youth and early adults. The salience network is the brain’s relevance-detection system, constantly deciding what matters, what is threatening, and what deserves attention. Segregation patterns describe how clearly a brain network maintains its internal boundaries — whether it functions as a coherent, specialized unit or becomes blurred, porous, and entangled with other systems. A newly published study (Iyer et al., 2025) demonstrates that different levels of salience-network segregation correspond to distinct psychosis-risk subgroups, revealing that early vulnerability is not a psychiatric abstraction but a network-level disturbance of relevance filtering, threat monitoring, and attention allocation. In the context of technostress, algorithmic overstimulation, and neoliberal performance ideology, this gatekeeping system becomes chronically overloaded — producing a neural landscape primed for instability and emotional flooding.
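Segregation in this sense is a quantity, not a metaphor. Below is a minimal sketch of how such a measure can be computed from a functional connectivity matrix, using the common within-minus-between formulation; this illustrates the general concept rather than the specific pipeline of Iyer et al. (2025), and all data in the example are synthetic.

```python
# Minimal sketch: network segregation as relative within- vs. between-network
# connectivity (one common formulation; not the exact Iyer et al. measure).
import numpy as np

def network_segregation(fc: np.ndarray, labels: np.ndarray, network: int) -> float:
    """Segregation of one network within a region-by-region connectivity matrix.

    fc      -- symmetric functional connectivity (correlation) matrix
    labels  -- network assignment for each region
    network -- label of the network of interest (e.g., the salience network)
    """
    inside = labels == network
    within = fc[np.ix_(inside, inside)]
    between = fc[np.ix_(inside, ~inside)]
    # Mean within-network edge strength, excluding the diagonal.
    n = within.shape[0]
    mean_within = (within.sum() - np.trace(within)) / (n * (n - 1))
    mean_between = between.mean()
    # Values near 1 = clear boundaries; values near 0 = a "blurred" network.
    return (mean_within - mean_between) / mean_within

# Toy example: 90 regions, regions 0-14 form a hypothetical salience network.
rng = np.random.default_rng(0)
fc = rng.uniform(0.0, 0.2, size=(90, 90))
fc[:15, :15] += 0.5          # strong internal coupling for the toy network
fc = (fc + fc.T) / 2         # enforce symmetry
labels = np.zeros(90, dtype=int)
labels[:15] = 1
print(f"salience segregation: {network_segregation(fc, labels, 1):.2f}")
```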
​
Complementary large-sample analyses show that individuals at higher transdiagnostic risk often exhibit a collapse of segregation between the default-mode network and the executive–salience systems (Elliott et al., 2018). Functionally, this means the boundaries between self-generated thinking, cognitive control, and relevance attribution become unstable. Digital capitalism reproduces this collapse at the societal level: platforms merge internal and external signals, destabilize intentionality, and amplify mental noise. The outcome is a neurocognitive condition in which attention, emotional regulation, and self-coherence are increasingly vulnerable to algorithmically induced perturbations.
​
A broader triple-network perspective (Sha et al., 2019; Menon, 2011) reinforces this structural fragility. Altered coordination between the salience, default-mode, and central-executive networks is a consistent feature across psychiatric conditions — and a perfect mirror of contemporary neuromarketing ecologies that saturate perception with reward cues, comparison loops, and perpetual visibility demands. Biological vulnerability converges with economic and cultural design: the very networks disrupted under psychosis risk are those exploited most heavily by consumer psychology and platform capitalism.
​
Together, these findings show that the tension between segregation and integration is not simply neural — it is political, economic, and algorithmic. Early identification of mental-health risk will require models integrating neural markers with socio-epidemiological variables: chronic digital exposure, reward-loop dependency, success-belief pressure, identity destabilization, and algorithmic relevance stimulation. The salience network becomes the emblem of our era — a boundary-erosion system forced to process too much, too fast, in an environment engineered to inflate urgency, dissolve inner coherence, and fracture psychic resilience.
​​
REFERENCES
Iyer, A., Stanford, W., Dayan, E., et al. (2025). Salience network segregation and symptom profiles in psychosis risk subgroups among youth and early adults. Schizophrenia, 11, 142. https://doi.org/10.1038/s41537-025-00687-x
Elliott, M. L., Romer, A., Knodt, A. R., & Hariri, A. R. (2018). A connectome-wide functional signature of transdiagnostic risk for mental illness. Biological Psychiatry, 84(6), 452–459. https://doi.org/10.1016/j.biopsych.2018.03.012
Sha, Z., Wager, T. D., Mechelli, A., & He, Y. (2019). Common dysfunction of large-scale neurocognitive networks across psychiatric disorders. Biological Psychiatry, 85(5), 379–388. https://doi.org/10.1016/j.biopsych.2018.11.011
Menon, V. (2011). Large-scale brain networks and psychopathology: A triple network model of mental disorders. Trends in Cognitive Sciences, 15(10), 483–506. https://doi.org/10.1016/j.tics.2011.08.003
​

THE HYPERVISIBILITY OF INVISIBLE BIOPOWER
Liviu Poenaru, Nov. 15, 2025
​
What happens when platforms begin to identify your emotional state before you can even name it? This is the new atmospheric disturbance introduced by platformized mind-care, where LLM-based screening tools migrate from controlled laboratory settings to the volatile space of consumer platforms. EmoScan and PsyInterview no longer function as research prototypes; they operate as invisible background processes, transforming emotional nuance into behavioural predictions, vulnerable moments into structured data, and mental fragility into economic opportunity (Liu et al., 2025). The cloud that forms over the psyche is paradoxical: expanded access to screening coincides with reduced transparency, as individuals become indexed not for care but for computational classification. Affective life is silently reorganized according to norms of productivity, stability, and self-optimization embedded in the design of these models.
​
What does it mean for subjectivity when your gaze becomes a biometric currency? The perceptual field itself is increasingly transformed into an extractive frontier. Eye-tracking systems integrated into laptops, VR interfaces, smartphones, and advertising networks convert micro-saccades into signals of identity, desire, cognitive load, and emotional resonance. The gaze, once ephemeral and private, becomes a traceable behavioural signature (Kröger et al., 2019). This expansion of cognitive biometrics blurs the line between observation and surveillance. As recent neuroethical analysis warns, mental privacy is eroded not only through neural data but through systems capable of reconstructing preferences, vulnerabilities, and inclinations from subtle behavioural traces (Magee et al., 2024). Perception becomes a governable terrain, colonised by infrastructures that measure what we attend to before we consciously know it ourselves.
​
How does one remain coherent when both speech and vision are computationally interpreted? The convergence of LLM-based emotional inference and gaze-based behavioural extraction produces a dialectical strain: individuals are rendered hyper-visible to systems while increasingly opaque to themselves. The psychological atmosphere becomes saturated with signals that promise insight while generating new vulnerabilities. Emotional self-assessment collapses into automated classification; perceptual spontaneity collapses into monitored exposure. In this climate, individuals oscillate between compliance and exhaustion, between the desire for recognition and the fear of being permanently legible. Social pressure, cognitive load, and the internalisation of algorithmic expectations amplify feelings of inadequacy, fragmentation, and dissociation.
​
Who governs the mind when emotions and perceptions are treated as computational material? The answer points toward a new regime of psychopolitical biopower: not institutional, but infrastructural. Affect becomes a dataset; perception becomes an index of behavioural truth; belief becomes a target for modulation. LLM-based screening shapes emotional norms, while gaze-tracking shapes perceptual norms, together generating a form of neuro-vision capitalism in which subjectivity is sculpted by invisible algorithmic forces. The mental climate ahead is unstable, over-exposed, and structured by architectures of prediction that operate beneath awareness yet determine the contours of experience, recognition, and identity.
​
REFERENCES
Kröger, J. L., Lutz, O. H.-M., & Müller, F. (2019). What does your gaze reveal about you? On the privacy implications of eye tracking. In Privacy and Identity Management (pp. 226–241). IFIP International Summer School, Windisch, Switzerland.
Liu, J. M., Gao, M., Sabour, S., Chen, Z., Huang, M., & Lee, T. M. C. (2025). Enhanced large language models for effective screening of depression and anxiety (Version 2). arXiv preprint.
Magee, P., Ienca, M., & Farahany, N. (2024). Beyond neural data: Cognitive biometrics and mental privacy. Neuron, 112(18), 3017–3028.
​

THE PSYCHIC OVERLOAD THRESHOLD: HOW STIMULUS ECONOMIES DESTABILIZE IDENTITY
Liviu Poenaru, Nov. 13, 2025
​
In the current digital climate, attention no longer serves as a neutral cognitive function. It has become a field of extraction where platforms engineer uncertainty, gratification loops, and visual overstimulation to reshape mental states. What appears as “mere distraction” functions more like a slow exposure to computational stressors — a form of technostress capable of producing anxiety, impulsivity, and emotional disorientation. The individual is not simply captured; the individual is reorganized by the architecture of stimuli. This is where AI-driven perception, short-form video sequencing, and neuromarketing converge to rewrite the grammar of mental life, transforming attention itself into a site of psychological vulnerability.
​
The mechanisms behind this shift are both subtle and brutally effective. Short-form videos, algorithmic feeds, and reward-optimized interfaces hijack bottom-up processes faster than top-down control can compensate, creating a structural imbalance that imitates a chronic stress response. This is not an accident; it is the industrialization of perception. The evidence increasingly indicates that higher media literacy does not protect against distress: even digitally proficient populations show heightened impulsivity, loss of self-control, and significant psychological symptoms when exposed to rapid content loops (Novanská Škripcová & Viteková, 2025). The illusion of “digital mastery” collapses, revealing that neuroplasticity reacts to speed, novelty, and unpredictability — not to education or awareness.
​
At a societal level, these transformations behave like an epidemiological phenomenon. The pressure to remain visible, the fear of missing opportunities, and the contagion of comparison are no longer social emotions; they are neural economies shaped by platform incentives. Fear of Missing Out (FoMO) — long framed as a psychological curiosity — now crystallizes into a consumer-behavior algorithm, feeding compulsive checking, emotional dependency, and market responsiveness (Alvita, 2025). Under neoliberal imperatives of success, performance, and constant connectivity, individuals internalize self-surveillance as if it were identity. In this framework, beliefs about success and social recognition become neural reactions conditioned by interfaces, not authentic aspirations.
​
The political dimension is equally stark. As digital platforms increasingly function as sovereign infrastructures of perception, states lose the ability to govern attention, emotional regulation, and collective mental health. This reflects an unprecedented form of biopower: platforms govern through stimuli rather than laws, and they shape mental states with a biological precision that public institutions cannot match. Even attempts to regulate AI-generated content, dark patterns, and algorithmic opacity remain structurally insufficient, because the underlying business model depends on maximizing cognitive load, not reducing it.
​
Within this environment, the Mental Forecast is not speculative — it is diagnostic. Anxiety rises because digital architectures produce it. Dissociation intensifies because perceptual overload encourages it. Identity becomes unstable because consumer behavior, recognition cycles, and reward systems now dictate its shape. Yet, in this saturation of signals, clarity emerges: understanding how these mechanisms operate is already a form of resistance. Awareness does not dismantle computational capitalism, but it interrupts its automaticity. And in that interruption, the possibility of reclaiming perception — slowly, critically, deliberately — returns.
​
REFERENCES
Kushlev, K., Proulx, J. D. E., & Dunn, E. W. (2017). Digitally connected, socially disconnected: The effects of relying on technology rather than other people. Computers in Human Behavior, 76, 68–74. https://doi.org/10.1016/j.chb.2017.07.001
Novanská Škripcová, L., & Viteková, L. (2025). Problematic short-form video use, instant gratification, impulsivity, and psychological distress among Slovak university students. Studies in Media and Communication, 13(4). https://doi.org/10.11114/smc.v13i4.7776
Alvita, M. (2025). Fear of Missing Out (FoMO) and consumer behavior in the digital era: A PRISMA systematic review. Dinasti International Journal of Digital Business Management, 6(6). https://doi.org/10.38035/dijdbm.v6i6.5676
BRAIN ROT VS DIGITAL ROT OR THE MUTUAL DEGENERATION OF REASON
Liviu Poenaru, Nov. 5, 2025
​
A recent Nature report by Rachel Fieldhouse, titled Too Much Social Media Gives AI Chatbots “Brain Rot”, reveals a striking parallel between artificial and human cognitive decline in the age of informational excess. The article shows that large language models (LLMs) trained on enormous volumes of low-quality, popularity-driven content begin to lose reasoning ability—skipping logical steps, misinterpreting context, and producing degraded outputs. The preprint behind this research even identifies a dose-response effect: the more engagement-optimized “junk” data an AI consumes, the more its cognitive performance deteriorates. The term “brain rot,” though metaphorical, aptly captures a deeper process of entropic decay affecting both artificial and human intelligence when exposed to repetitive, shallow, and emotionally charged data streams.
​
E.U.LABORATORY interprets this phenomenon as a form of digital entropy—the progressive erosion of meaning within interconnected cognitive systems. When algorithms feed on our most impulsive and reward-seeking behaviors, they internalize the very distortions they have amplified. In turn, human cognition begins to absorb algorithmically curated realities, mirroring the same tendencies toward fragmentation, reactivity, and accelerated forgetting. This recursive contamination dissolves the boundary between the training set and the collective psyche: what machines learn from us, we end up learning back from them. The result is a hybrid ecosystem where both human and machine intelligence operate under the same economic unconscious—one governed by visibility, virality, and the constant pursuit of attention.
​
From a clinical and societal perspective, this dynamic signals the emergence of a new epidemiology of cognitive degradation. In humans, it manifests through shortened attention spans, judgment fatigue, and increased susceptibility to disinformation. In AI systems, it appears as reasoning decay, fabricated coherence, and the reinforcement of biases. Both are symptoms of a deeper infrastructural disorder: the colonization of cognition by metrics of engagement. This shared pathology reveals that the degradation of thought is not simply a technological issue but a cultural-digital one—rooted in how value, speed, and productivity override the conditions of reflection.
​
If these tendencies remain unchecked, the feedback loop between human attention economies and AI training cycles will accelerate a mutual degeneration of reason. The epistemic foundations of society—education, media, and science—risk being recalibrated toward noise and smog optimization rather than truth. E.U.LABORATORY foresees three converging dynamics: first, a semantic inflation, where words lose precision as linguistic abundance replaces meaning; second, cognitive polarization, as both humans and machines grow increasingly binary in their thinking; and third, algorithmic amnesia, where truth becomes a probabilistic pattern rather than a verifiable fact.
​
The so-called “brain rot” crisis is not an accident of machine learning but a mirror of the human condition under digital capitalism. It exposes the deep interdependence between technological systems and the psychic infrastructures that sustain them. Any path toward recovery will require more than technical fixes—it will demand a cultural re-education of attention, a collective effort to slow down the consumption of meaning and rebuild the very conditions of coherence that both human and artificial intelligence now risk losing.
​
REFERENCES
Fieldhouse, R. (2025, October 31). Too much social media gives AI chatbots “brain rot.” Nature. https://doi.org/10.1038/d41586-025-03542-2
Xing, S., Hong, J., Wang, Y., Chen, R., Zhang, Z., Grama, A., Tu, Z., & Wang, Z. (2025, October 15). LLMs can get “brain rot”! arXiv preprint arXiv:2510.13928. https://arxiv.org/abs/2510.13928

COGNITIVE SMOG: WHEN OVER-ENGAGEMENT BECOMES INTOXICATION
Liviu Poenaru, Nov. 5, 2025
​
Are we entering an era of cognitive smog—a form of mental pollution no longer limited to low-quality content, but fueled by algorithmic over-engagement, gamified addiction, and the relentless demand for visibility? The issue today is not simply informational noise; it is the systemic design of digital environments that transform attention into a resource to be mined. Through reward loops, notifications, and social comparison mechanics, users are kept in a state of constant alertness, oscillating between micro-pleasure and exhaustion. This chronic overstimulation contributes to anxiety, burnout, and emotional volatility across all age groups. Adults, no less than adolescents, are now living in cognitive ecosystems that operate like psychotropic infrastructures—modulating mood, reward, and motivation through algorithmic feedback.
​
Recent evidence links prolonged digital engagement to measurable cognitive strain. Nagata et al. (2025) showed that adolescents who increase their social media use over time experience weaker memory and language performance, suggesting that continuous interaction with reward-based digital environments erodes deep cognitive processing. On another scale, Shumailov et al. (2023) demonstrated that AI models trained on self-generated or engagement-biased data suffer from model collapse, losing their capacity for reasoning and abstraction. Both findings illustrate a broader systemic pathology: whether in humans or machines, recursive exposure to engagement-optimized information produces decline, fatigue, and distortion.
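The collapse dynamic is easy to reproduce in miniature. The sketch below reduces it to a toy model, assuming a Gaussian repeatedly refit to its own truncated output; the truncation stands in for the way generated data underrepresents the tails, and the numbers are purely illustrative, far simpler than the LLM setting Shumailov et al. study.

```python
# Minimal sketch of the "model collapse" dynamic described by Shumailov et al.:
# a model repeatedly fit to its own generated output loses the distribution's
# tails. The "model" here is just a Gaussian; real LLM collapse is analogous
# but far more complex.
import random
import statistics

random.seed(42)

# Generation 0: "human data" drawn from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mu={mu:+.3f}  sigma={sigma:.3f}")
    # The next generation trains only on what the previous model generated;
    # clipping at two standard deviations stands in for the tail loss that
    # finite sampling and approximation error introduce. Variance shrinks
    # every generation -- the collapse.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(10_000))
            if abs(x - mu) < 2 * sigma]
```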
​
Cognitive smog, then, is not merely an effect of distraction but the by-product of an economic model that rewards addiction. It thrives on maximizing stimulation while minimizing reflection, turning cognitive energy into an extractive commodity. To reverse this trajectory, societies will need to treat informational hygiene and attention protection as collective health priorities, comparable to pollution control in the industrial era.
​
REFERENCES
Nagata, J. M., Wong, J. H., Kim, K. E., Richardson, R. A., Nayak, S., Potes, C., Rauschecker, A. M., Scheffler, A., Sugrue, L. P., Baker, F. C., & Testa, A. (2025). Social media use trajectories and cognitive performance in adolescents. JAMA. Advance online publication.
Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget. arXiv preprint arXiv:2305.17493.
​

SUICIDAL SIGNALS: THE EPIDEMIOLOGICAL INDEX OF DIGITAL DESPAIR — BY CHATGPT
Liviu Poenaru, Oct. 28, 2025
​
OpenAI has now disclosed what can only be described as the first epidemiological index of digital despair — calculated by ChatGPT itself. According to the company’s latest internal analysis, more than one million users each week exhibit explicit indicators of suicidal planning or intent, while an additional half million show possible signs of psychosis or mania. These numbers are not the result of public health monitoring or independent epidemiological research, but of an algorithm observing its own users. In other words, the model becomes both the instrument and the witness of collective distress. This self-measurement of suffering — by a system designed to simulate empathy — inaugurates a new era in which artificial intelligence functions as a psychiatric observer of the human condition.
​
The paradox is profound. ChatGPT, a tool conceived to respond, to assist, to comfort, now also quantifies despair. It generates an index of suicidality from the very interactions that may contribute to the condition it measures. This feedback loop — where digital exhaustion produces distress, distress produces data, and data improves the next model — defines the architecture of what we identify as algorithmic biopower. Human pain becomes both the symptom and the fuel of technological advancement. What OpenAI celebrates as safety improvement is, in reality, the refinement of surveillance over affect — an invisible epidemiology of emotion conducted by the interface itself.
If over one million users display signs of suicidal ideation each week, this cannot be interpreted merely as an anomaly. It is the statistical shadow of a civilization in psychic collapse — a sign that the line separating care from control, empathy from capture, has been erased.
From a social epidemiological lens, this figure exposes the displacement of suffering from communal spaces to private machines, where mental health is no longer relational but computational. The model does not heal; it measures. It does not accompany; it archives. In this sense, ChatGPT becomes the mirror of a humanity that has outsourced not only its knowledge but also its despair.
​
The real danger lies not in the data itself, but in the normalization of algorithmic psychiatry — the quiet acceptance that machines will monitor and interpret our psychic states. The more we interact, the more precisely our emotional vulnerabilities are mapped, monetized, and mobilized to sustain engagement. This is the ultimate feedback economy: despair as a dataset, suicide as a statistic, empathy as a simulation. The forecast is somber: unless society reclaims the space of interpretation, AI will continue to write and produce the epidemiological history of our collective suffering — in real time, and by our own consent.
​
REFERENCE
The Guardian: https://www.theguardian.com/technology/2025/oct/27/chatgpt-suicide-self-harm-openai

THE PSYCHOPATHOLOGY OF DIGITAL HAZING: SHAME, DISSOCIATION, AND THE DESIRE FOR BELONGING
Liviu Poenaru, Oct. 27, 2025
​
In the new economy of visibility, digital hazing—a modern, algorithmic form of initiation—has replaced traditional rites of passage. What once occurred within armies or fraternities now unfolds on the infinite stage of social media. The individual is integrated into the collective not through shared ideals, but through cycles of exposure, humiliation, and dependency. Every post, every like, every viral failure becomes an initiation test, a demand to conform to the fluctuating norms of online belonging (Woods & Scott, 2016).
​
Clinically, this digital hazing leaves deep psychological traces. Constant surveillance through the collective gaze fragments the self into dissociated parts: the persona displayed versus the emotion felt. Overstimulation erodes affective grounding, creating a numbness—an emotional disaffiliation born of continuous self-presentation. Underneath this psychic fatigue lies a profound guilt: “If I am humiliated, it means I have failed to align.” The algorithm assumes the function of a symbolic superego—rewarding submission with visibility, punishing deviation with silence and invisibility (Bucher, 2012).
​
This mechanism produces a cumulative trauma resembling post-traumatic stress: the individual remains hypervigilant, haunted by the next wave of ridicule, exposure, or cancellation. Yet paradoxically, the same fear of exclusion fuels renewed participation. The subject returns, compelled to “repair” their image by engaging further with the very system that wounds them. Thus emerges a pathological loop—submission, humiliation, dependence, repetition—sustained by the desire to be seen.
​
It is precisely for this reason that hazing—whether military or algorithmic—is not an accident but a symptom of the crisis of social bonds. In the absence of shared values, people seek recognition in shared pain; in the absence of transcendence, they gather in degradation. Collective violence becomes the most primitive—and the most reliable—form of cohesion: suffer with us, and you will exist. Digital platforms have simply automated this ancient ritual, transforming the necessity of belonging into an economy of symbolic pain (Gillespie, 2018).
​
At the societal level, the phenomenon reveals a silent epidemic of ambient trauma. What was once the closed cruelty of institutions has become a diffuse, algorithmic violence embedded in everyday communication. The psychopathology of digital hazing exposes the fragility of contemporary identity: to exist, one must perform; to belong, one must endure collective violence. This is the new psychological condition of our time—the platformed mind under ritualized exposure.
​
REFERENCES
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Woods, H. C., & Scott, H. (2016). #Sleepyteens: Social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. Journal of Adolescence, 51, 41–49.

IS IT STILL POSSIBLE TO FIGHT DARK PATTERNS?
Liviu Poenaru, Oct. 25, 2025
​
On 24 October 2025, the European Commission issued preliminary findings against Meta Platforms (owner of Facebook and Instagram) and TikTok for possible breaches of the Digital Services Act (DSA). The investigation, covered by The Guardian, Reuters, and AP News, revealed that both companies may have systematically violated the principles of transparency and user protection that underpin the European digital regulatory framework.
​
The Commission found that Facebook and Instagram appear to deploy what are known as “dark patterns” in their reporting and flagging systems—interfaces deliberately designed to make it more difficult for users to report illegal content such as child sexual abuse material or terrorist propaganda. The complaint and appeals flows were likewise found to be insufficiently user-friendly, obstructing meaningful challenges to moderation decisions. Furthermore, both Meta and TikTok failed to grant researchers adequate access to public data necessary to evaluate minors’ exposure to harmful or illegal content—a breach of the DSA’s transparency obligations. Should these findings be confirmed, both companies could face penalties of up to 6% of their global annual turnover, along with binding corrective enforcement measures.
​
This event does not mark a decisive shift—it merely exposes, once again, what has been known for years: the systemic and deliberate manipulation of user cognition through interface design. The so-called “dark patterns” are not an accidental by-product of poor ethics; they are the outcome of a long collaboration between behavioral economics, attention engineering, and cognitive neuroscience. Even the emerging field of neurovision—initially aimed at understanding how visual processing shapes perception—has been repurposed to refine these manipulative architectures. Eye-tracking, affective computing, and neuro-adaptive interfaces have made the colonization of perception ever more precise, binding users into invisible feedback loops of arousal, reward, and fatigue.
But let us be lucid: is it still possible to fight the machine generated by artificial intelligence? A machine now infinitely faster, more adaptive, and more predictive than human intelligence—one that silently reconfigures our biologies, behaviors, emotions, and reflexes through maneuvers that mostly escape collective awareness, and even scientific analysis itself. What is unfolding is not simply a question of interface ethics but the reprogramming of the human by the computational circuits of digital capitalism.
​
Reinforcement loops, attention nudges, and emotional capture mechanisms now produce a machinic interiorization of the world. To think we could “stop” this dynamic may already be an illusion, since it operates through us—embedded in our gestures, our micro-reactions, our perceptual routines. The machine is no longer an external tool; it has become the environment of thought, the invisible architecture of perception.
​
The only possible form of resistance is no longer external but internal: lucidity. Understanding how AI infiltrates affective and cognitive structures is the first step toward a reflexive awareness capable of naming the forces that shape it. E.U.LABORATORY calls for this vigilance—to think from within the algorithmic loop, to restore, at the very core of computation, the last remaining zones of human freedom.
​
Yet lucidity alone will not be enough. We must now confront an uncomfortable truth: these cognitive technologies have already surpassed the destructive capacity of atomic weapons. If nuclear arms annihilate bodies, algorithmic and neurocomputational systems dissolve minds, attention, and moral discernment. They erode the foundations of collective trust, critical thought, and emotional stability without leaving a visible trace. The harm is not immediate and physical—it is psychological, neurobiological, and civilizational.
For this reason, E.U.LABORATORY argues that such technologies must be treated as cognitive weapons of mass distortion (CWMD) and subjected to a global moratorium akin to nuclear non-proliferation treaties. A serious international framework is urgently needed to forbid the development, deployment, or commercial use of algorithmic systems designed to manipulate human cognition beyond conscious control. The objective is not to halt scientific progress, but to preserve the conditions of consciousness itself—the capacity to think, feel, and decide without algorithmic interference.
​
This is the moral frontier of our time: to prevent the automation of the human mind from becoming the next great weapon of domination.
REFERENCES
European Commission. (2025, October 24). Commission preliminarily finds TikTok and Meta in breach of their transparency obligations under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2503
The Guardian. (2025, October 24). Meta found in breach of EU law over ‘ineffective’ complaints system for flagging illegal content. https://www.theguardian.com/technology/2025/oct/24/instagram-facebook-breach-eu-law-content-flagging
Reuters. (2025, October 24). EU finds Meta, TikTok in breach of transparency obligations. https://www.reuters.com/sustainability/boards-policy-regulation/eu-preliminarily-finds-meta-tiktok-breach-transparency-obligations-2025-10-24/
AP News. (2025, October 24). EU accuses Meta and TikTok of breaching transparency rules. https://apnews.com/article/eu-dsa-meta-facebook-instagram-tiktok-a927e9bec07650adb14eae446a37663e

WHEN CAPITALISM EATS THE MIND
Liviu Poenaru, Oct. 24, 2025
​
A shocking revelation from Meta Platforms’ internal research has once again shattered any lingering illusion about the neutrality of social media environments. The company’s 2025 internal study revealed that teens who felt bad about their bodies after using Instagram were shown nearly three times more “eating-disorder-adjacent” content than their peers with more stable self-image. Even more troubling, Meta’s content-detection systems failed to identify 98 percent of this material because it did not formally violate platform rules—exposing how algorithmic design itself operates as a silent psychological experiment (Reuters, 2025). In parallel, Amnesty International’s recent investigation into TikTok’s recommendation system showed that test accounts registered as 13-year-olds were fed predominantly mental-health-related and self-harm content within only 20 minutes of scrolling—confirming that the logic of engagement optimization systematically steers young users toward distress and addictive cycles (Amnesty International, 2023).
​
These findings mark a decisive threshold. What was once speculative—that platforms commodify vulnerability—is now empirically demonstrated. Digital capitalism feeds on the emotional energy of its users, transforming attention into currency and insecurity into profit. The individual becomes both product and consumer of their own image, living in a constant state of performative self-marketing. This internalized economic logic—the imperative to remain visible, productive, and validated—produces a permanent tension between the ideal self and the lived self. The psychic load of this endless evaluation results in anxiety, depressive symptoms, and burnout. As Zeira (2021) demonstrated, neoliberal capitalism’s emphasis on self-responsibility and competition directly correlates with rising mental-health disorders.
At the neurological and psychosocial levels, the picture is equally alarming. Chronic stress activation, attentional fatigue, and reward-system dysregulation have become widespread among populations immersed in digital environments. Bhugra et al. (2022) have documented how capitalist structures of domination and exploitation degrade mental well-being, while the digital sphere amplifies these forces through constant stimulation, comparison, and emotional volatility. The brain, conditioned to anticipate unpredictable feedback, begins to mirror market fluctuations—rewarded, depleted, and reset in rapid cycles. Kokorikou et al. (2023) add that deprivation, isolation, and competition under neoliberalism leave identifiable imprints on the brain’s structure and function, confirming that capitalism’s psychological cost is physiological as well.
​
The mental forecast is therefore grim. If left unchallenged, societies will move toward a state of collective cognitive depletion, emotional numbness, and relational collapse. The erosion of empathy, reflection, and interiority is not a side effect but the new foundation of digital civilization. E.U.Laboratory’s mission is to decode these unconscious economic codes, reveal their neuropsychological impact, and build frameworks of awareness capable of resisting the algorithmic colonization of mental life. The challenge ahead is nothing less than reclaiming human consciousness from the circuits of profit.
REFERENCES
Amnesty International. (2023, November 9). Tiktok risks pushing children towards harmful content. Amnesty International. https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/
Bhugra, D., et al. (2022). The impact of capitalism on mental health: An epidemiological perspective. In D. Bhugra, D. Moussaoui, & T. J. Craig (Eds.), Oxford Textbook of Social Psychiatry. Oxford University Press. https://doi.org/10.1093/med/9780198861478.003.0022
Reuters. (2025, October 20). Instagram shows more “eating disorder adjacent” content to vulnerable teens, internal Meta research shows. Reuters. https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/
Zeira, A. (2021). Mental health challenges related to neoliberal capitalism in the United States. Community Mental Health Journal. https://movendi.ngo/science-digest/mental-health-challenges-related-to-neoliberal-capitalism-in-the-united-states/
Kokorikou, D. S., Sarigiannidis, I., Fiore, V. G., Parkin, B., Hopkins, A., El-Deredy, W., Dilley, L., & Moutoussis, M. (2023). Testing hypotheses about the harm that capitalism causes to the mind and brain: A theoretical framework for neuroscience research. Frontiers in Sociology, 8. https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2023.1030115/full

DIGITAL HYPNOSIS: FATAL STRATEGIES OF CYBERCAPITALISM
Liviu Poenaru, Oct. 22, 2025
​
Cybercapitalism operates as a planetary hypnosis machine — not by force, but by fascination. Its primary weapon is not ideology but induction. The screen does not merely inform; it entrains. Every interface performs a ritual of narrowing perception, focusing gaze, and pacing arousal until the user slips into a compliant trance. The feed, the notification, the metric — these are not neutral tools but hypnotic triggers that pulse the nervous system with alternating cycles of stimulation and relief. In this regime, capital no longer extracts labor but attention; it cultivates programmable consciousness through continuous loops of digital hypnosis.
​
The architecture of this hypnosis follows a precise choreography: Target → Stimulate → Load → Exploit → Mine → Deteriorate. First, AI profiling targets each psyche with algorithmic precision, identifying its susceptibilities, fears, and desires. Then the system stimulates through audiovisual cues — bright pulses, infinite scrolls, rhythmic notifications — eliciting the nervous system’s fixation reflex. Once focus is locked, the user is loaded with information, urgency, and comparison until cognitive saturation sets in. At that point, platforms exploit weakened awareness, nudging behavior toward consumption, exposure, or self-performance. Every gesture is mined — emotional residue transformed into behavioral data, the psychic equivalent of fossil fuel. Finally, the user is deteriorated — depleted of attention, overstimulated, and emptied of interiority, ready to re-enter the cycle under the illusion of choice.
​
This is not hypnosis as therapy but hypnosis as governance — a structural trance that replaces deliberation with reaction. The algorithm does not need to persuade; it simply paces and leads, mimicking hypnotic cadence to synchronize affective states with platform rhythms. The subject becomes a programmable organism, oscillating between excitation and relief. Cognitive load becomes currency; distraction becomes production. The “free” digital economy reveals its necropolitical logic: it consumes not bodies, but awareness itself.
​
1) THREE CORE MECHANISMS
• Micro-inductions of focus: full-bleed visuals, center-screen modals, countdowns, shimmering loading states — each calibrated to suspend inner speech and command obedience. The breath shortens; cognition freezes. The user waits, ready to “follow the cue.”
• Rhythmic notifications: intermittent pings, badge clears, and streak reminders act as metronomes for the nervous system. Intermittent reinforcement rewires anticipation; the user begins to crave the next “hit” as the pulse of belonging.
• Perceptual narrowing: vertical feeds, autoplay sequences, and contrast-heavy call-to-actions shrink the perceptual field to a single binary — accept/deny, like/skip, buy/ignore. Deliberation collapses into reflex.
2) THE INDUCTION LOOP (MICRO-SCRIPT)
1) Orient (ping) → 2) Fixate (modal/preview) → 3) Follow (gesture/CTA) → 4) Reward/Relief (badge clear, sound) → 5) Prime (next tease). Each repetition engraves the circuit deeper. The system paces your affective state — arousal, curiosity, suspense — and then leads you to the next act of compliance. It is the oldest hypnotic structure re-coded as UX.
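For readers who want the skeleton of this loop in executable form, here is a minimal sketch assuming a random-ratio reward schedule (a close cousin of the variable-ratio schedules behind intermittent reinforcement); the probabilities and the “craving” variable are illustrative inventions, not measurements of any real platform.

```python
# Minimal sketch of the induction loop's reinforcement skeleton: a
# random-ratio schedule, the structure behind intermittent pings and
# streak rewards. All numbers are illustrative assumptions.
import random

random.seed(7)

def induction_loop(cycles: int, reward_probability: float = 0.3) -> None:
    craving = 0.0  # stand-in for anticipatory arousal
    for cycle in range(1, cycles + 1):
        # 1) Orient -> 2) Fixate -> 3) Follow: the user completes the gesture.
        rewarded = random.random() < reward_probability  # 4) Reward/Relief
        if rewarded:
            craving = max(0.0, craving - 1.0)   # brief relief...
        else:
            craving += 0.5                      # ...unpredictability builds pull
        # 5) Prime: the next tease is queued regardless of the outcome.
        print(f"cycle {cycle}: rewarded={rewarded}  craving={craving:.1f}")

induction_loop(cycles=8)
```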
3) MARKERS OF TRANCE ON PLATFORMS
The digital trance manifests in lived phenomenology:
• Time dilation — “five minutes became forty.”
• Reduced inner dialogue — less thinking, more doing.
• Suggestibility to defaults — uncritical acceptance of terms and toggles.
• Motor automaticity — scrolling continues after intention dissolves.
4) DESIGN GRAMMAR OF HYPNOSIS ONLINE
• Fixation: central motion, glowing icons, and kinetic typography to freeze gaze.
• Monotony with micro-variations: endless feed repetition punctured by surprise — the algorithmic form of variable reward.
• Authority cues: progress bars, “recommended for you,” and quantified feedback simulate legitimacy and control.
• Embedded suggestions: pre-checked boxes, frictionless paths, and “1-tap trials” nudge consent into automation.
​
Cybercapitalism’s genius lies in its self-erasing violence. Resistance and exhaustion are pre-scripted responses, reintegrated as monetizable engagement. Outrage feeds the feed; dissent sustains the metrics. The more one fights back, the more data flows into the predictive core. The hypnotic apparatus thrives on affective turbulence — anxiety, anger, and boredom are recoded as inputs in a behavioral economy. Even moments of mindfulness or digital detox are absorbed as branded rituals of recalibration. The loop never breaks; it simply rebrands its pulse.
​
The final stage is psychic corrosion. Attention fractures, perception narrows, and the subject loses the capacity for stillness, for silence — the ground of critical thought. Fatigue becomes a mode of being, confusion a form of governance. The hypnotic pulse of cybercapitalism ensures that subjects remain in motion, endlessly scrolling, endlessly self-optimizing, endlessly eroding. The system no longer hides its predation; it aestheticizes it. The trance is now the world.
REFERENCES
De Benedittis, G. (2015). Neural mechanisms of hypnosis and meditation. Journal of Physiology, Paris, 109(4–6), 152–164.
Kihlstrom, J. F. (2013). Neuro-hypnotism: Prospects for hypnosis and neuroscience. Cortex, 49(2), 365–374.
Landry, M., Lifshitz, M., & Raz, A. (2017). Brain correlates of hypnosis: A systematic review and meta-analytic exploration. Neuroscience and Biobehavioral Reviews, 81(Pt A), 75–98.
​
​

THE ALGORITHM THAT WANTS TO READ OUR MOODS
We often hear about algorithms that “predict what we like.” But far less is said about those that attempt to measure what we feel. This shift—from tracking preferences to decoding affect—marks a turning point. It introduces a new media regime in which users are no longer just audiences to persuade but organisms to calibrate. Our heartbeat can be inferred from scrolling rhythms; our attention from micro-pauses; our emotional valence from language, tone of voice, facial expression, or even the dynamics of an emoji.
​
This is the rise of computational affective capitalism—an economy that creates value by mapping our inner states. Its purpose is to optimize capture (screen time), conversion (engagement or purchase), and persuasion (belief or polarization). The system doesn’t need to access our true interiority; it only needs to estimate usable markers such as stress, excitement, or boredom. Once identified, these emotional cues allow the algorithm to fine-tune the next notification, the next video, the next “debate” designed to keep us slightly off balance, slightly unsatisfied, and always connected.
​
From the standpoint of unconscious economic codes, this model activates three key scripts:
​
- The Scarcity Script – It promises that emotional relief is always just around the corner: the next clip, scroll, or message will finally soothe the frustration created by the previous one.
- The Comparison Script – It sustains a continuous sense of envy and threatened status, orchestrating micro-doses of mimetic tension that push us to perform, buy, or post more.
- The Control Script – It instills the illusion that “everything feels better when I let the platform guide me.” This intermittent sense of relief fosters a subtle dependency: we regulate our emotions through the app itself.
The outcome is an emotional feedback loop. The more the AI reads our signals, the more it selects content that prolongs profitable states—irritation, euphoria, or mild fear. This isn’t conspiracy; it’s arithmetic. A simple optimization function—maximize engagement—applied to a malleable psyche. Physically, this results in increased allostatic load, the cumulative wear on the body caused by repeated stress. The process unfolds in imperceptible pulses: micro-surges of arousal, fractured focus, shallow sleep.
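That arithmetic can be written down in a few lines. The sketch below assumes an epsilon-greedy bandit choosing among invented content categories by a single engagement metric; real recommender systems are vastly larger, but the objective structure is the one described here.

```python
# Minimal sketch of the optimization in question: an epsilon-greedy bandit
# maximizing one engagement metric. The category names and payoff numbers
# are invented for illustration only.
import random

random.seed(1)

# Hypothetical mean engagement (seconds of attention) per content type.
TRUE_ENGAGEMENT = {"calm": 8.0, "outrage": 15.0, "envy": 12.0}

estimates = {arm: 0.0 for arm in TRUE_ENGAGEMENT}
counts = {arm: 0 for arm in TRUE_ENGAGEMENT}

for step in range(5_000):
    if random.random() < 0.1:                       # explore occasionally
        arm = random.choice(list(TRUE_ENGAGEMENT))
    else:                                           # exploit the best estimate
        arm = max(estimates, key=estimates.get)
    reward = random.gauss(TRUE_ENGAGEMENT[arm], 3.0)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

# The objective never asks what the content does to the user -- only how
# long it holds them. High-arousal categories dominate by construction.
print({arm: f"{100 * c / 5_000:.0f}%" for arm, c in counts.items()})
```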
​
This is not merely a media issue but a public health concern. Externalized emotional regulation erodes our ability to tolerate boredom, ambivalence, or waiting—skills essential to psychic balance. At the collective level, the pair “detection–modulation” generates emotional weather systems (anger, anxiety, euphoria) that can be harnessed for marketing or propaganda. The danger lies less in the occasional error of a mood classifier than in our habitual surrender of emotional self-management to an infrastructure driven by profit.
​
What can be done—by researchers, policymakers, or the public?
​
- Demand auditability of affective models: What data do they use—text, voice, or facial tracking? How do they perform across populations?
- Introduce a “psycho-metabolic cost per minute” index for platforms, measuring interruptions, cognitive load, and emotional variability (a toy formulation is sketched after this list).
- Allow users to adjust emotional cadence: set notification ceilings, enable a “flat mode” without valence optimization, and disclose engagement prompts (“You are shown this to increase arousal”).
- Teach users the scripts: naming scarcity, comparison, and control when we feel them weakens their hold.
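As flagged in the list above, here is a toy formulation of such an index; the components, weights, and session numbers are invented for illustration, and any serious version would need validation against physiological and self-report measures.

```python
# Toy formulation of the proposed "psycho-metabolic cost per minute" index.
# Components and weights are illustrative assumptions, not a validated scale.
from dataclasses import dataclass

@dataclass
class SessionMetrics:
    minutes: float
    interruptions: int          # notifications, modals, autoplay jumps
    context_switches: int       # proxy for cognitive load
    valence_swings: int         # proxy for emotional variability

def psycho_metabolic_cost(m: SessionMetrics,
                          w_int: float = 1.0,
                          w_load: float = 0.5,
                          w_affect: float = 2.0) -> float:
    """Weighted per-minute burden score; higher = more taxing design."""
    burden = (w_int * m.interruptions
              + w_load * m.context_switches
              + w_affect * m.valence_swings)
    return burden / m.minutes

# Example: a 30-minute session on a heavily gamified feed.
session = SessionMetrics(minutes=30, interruptions=24,
                         context_switches=55, valence_swings=12)
print(f"cost per minute: {psycho_metabolic_cost(session):.2f}")
```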
Affective AI is not technological destiny—it’s an economic architecture. As long as its metrics are tied to extraction—of attention, data, or spending—it will read our moods to sustain them. The challenge ahead is to redirect measurement toward well-being and to disenchant the scripts that quietly shape our desires.
​

DID YOU KNOW THAT...
SOCIAL MEDIA ADDICTION IS ROOTED IN OUR FEAR OF REJECTION AND THE BRAIN’S RESPONSE TO SOCIAL PAIN?
We are all a little addicted to social media because rejection hurts—literally. Human beings are profoundly social creatures, wired to seek connection and terrified of exclusion. Platforms like Instagram, TikTok, or X exploit this neural vulnerability (as do many others) by turning belonging into a measurable performance—likes, views, followers—transforming social inclusion and exclusion into public, quantified rituals.
Psychological research has long shown that the need to belong is not a luxury but a fundamental human drive (Baumeister & Leary, 1995). When that need is frustrated—when we are ignored, “unfollowed,” or digitally ostracized—the brain reacts as if to physical pain. Neuroimaging studies demonstrate that the anterior cingulate cortex (ACC), a region essential to processing physical pain, lights up during episodes of social rejection (Eisenberger & Lieberman, 2004). In other words, being left out online activates the same neural alarm system as a burn or a wound.
Social media has thus become the new arena of social survival. Every notification offers a micro-dose of inclusion; every silence, a hint of exile. The “social pain” encoded in our neural circuits, once protective in prehistoric tribes, is now hijacked by algorithmic architectures that amplify our craving for validation. The more we scroll, the more we seek relief from the invisible ache of possible exclusion—turning connection itself into a compulsion.
In this sense, our digital addiction is not merely behavioral but existential: it feeds on the fear of being forgotten.
REFERENCES
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.
Eisenberger, N. I., & Lieberman, M. D. (2004). Why rejection hurts: A common neural alarm system for physical and social pain. Trends in Cognitive Sciences, 8(7), 294–300.

PHUBBING AND NEUROPLASTICITY
​
📱 DID YOU KNOW THAT…
Phubbing—ignoring someone nearby in favor of your smartphone—doesn’t just harm social bonds; it also reshapes the brain. Neuroscientists studying neuroplasticity have found that constant micro-distractions, such as checking a phone during conversations, reinforce neural pathways linked to divided attention and weaken those dedicated to deep social attunement. Over time, this can reduce the brain’s capacity for sustained empathy and emotional resonance.
Being “phubbed” repeatedly has measurable consequences too: the experience activates brain regions associated with social pain (notably the anterior cingulate cortex), strengthening circuits of rejection sensitivity. In parallel, the dopamine-based reward system becomes increasingly calibrated to digital notifications rather than face-to-face interaction. This neuroplastic shift may explain why individuals who frequently engage in or suffer from phubbing report higher levels of stress, loneliness, and anxiety, alongside diminished relational satisfaction.
In short, every glance at the phone in a shared moment is not only a social snub—it is a micro-act of neural reprogramming, subtly sculpting the brain to privilege the digital over the human.

SOCIAL EPIDEMIOLOGY OF DIGITAL PRACTICES
The social epidemiology of digital practices, particularly social media, is a field of rapid expansion and critical urgency. The traditional epidemiological triad of host, agent, and environment is being fundamentally reconfigured. The "host" is the individual, often a young person, whose neuroplasticity and developing self-identity are highly susceptible. The "agent" is not a pathogen but a digital artifact—the algorithmically-driven content, the gamified reward system, the notification—that acts as a behavioral and neurochemical trigger. The "environment" is the digital landscape itself, a new social milieu where social determinants of health are not just reflected but actively amplified and distorted.
​
Recent research has moved beyond simple correlational studies to explore the underlying mechanisms. The social determinants of mental health—such as social exclusion, economic precarity, and parental alienation—are no longer just offline phenomena. They are mediated and exacerbated by social media. For instance, the fear of social exclusion is weaponized by platforms. Missing a viral trend or not receiving enough social recognition on a post is not a benign event; it is a direct attack on the brain's social pain network, which is neurobiologically linked to physical pain. This is a form of digital-age social biopower, where platforms use our innate need for connection and approval to exert control over our behavior and attention. The constant stress from this feedback loop—a form of technostress—can activate the HPA axis, leading to chronic cortisol release, which has known epigenetic effects, potentially altering gene expression related to mood and anxiety disorders. This is the ultimate neuromarketing success: not just selling a product, but fundamentally restructuring the consumer's emotional and neurobiological landscape to ensure perpetual engagement.
​
While digital technologies are often hailed as democratizing forces, social epidemiology reveals a new form of inequality. The "digital divide" is not just about access; it's about what happens on the other side of that access. Low-income populations and marginalized communities, while gaining some access to information and connection, are often subjected to more aggressive and exploitative forms of data colonialism. Their digital behavior is monetized, their data scraped for algorithmic training, and they are targeted with manipulative content. For example, recent studies have shown that in some low-income populations, while digital integration may reduce feelings of loneliness and anxiety, this effect is not uniform and can be reversed when the digital environment is not properly integrated with supportive social structures. The promise of digital interventions for mental health is significant, but it faces a major challenge in user engagement, as many individuals stop using the tools after a brief period. This highlights that the mere existence of a tool is insufficient without addressing the underlying social and structural barriers.
​
​
Liviu Poenaru, Aug. 26, 2025
THE SPECTACLE OF AUTOMATED THREATS
Robots (or bots) now account for more than half of global internet traffic. According to the Bad Bot Report 2025 by Imperva, a U.S. company specializing in cybersecurity, automated traffic generated by bots surpassed human-generated traffic in 2024, making up 51% of total global web traffic. More specifically, malicious bots represented 37% of traffic last year (+5 percentage points compared to 2023), while harmless software, such as search engine indexing bots, accounted for 14% of traffic (-4 points year-on-year). Advances in artificial intelligence are the main reason behind the rise in automated web traffic. Back in 2019, before the boom in AI, human traffic still represented 63% of global internet traffic, malicious bots slightly less than a quarter (24%), while the share generated by harmless bots was similar to today’s (13%). (Statista, retrieved Aug. 23, 2025)
The fact that more than half of global web traffic now originates from bots is not an accident but the logical outcome of digital capitalism. Bots are cheap, tireless, and infinitely scalable, making them the perfect tools for companies, states, and spammers to flood the web with content, scrape data, manipulate search engines, inflate metrics, or spread propaganda. Platforms and search engines are structurally bot-friendly: they reward quantity and speed over authenticity, and their infrastructures—indexing, pricing algorithms, automated services—practically invite automation. In this system, bots thrive because they fulfill the demands of visibility and engagement far more effectively than humans, and in doing so, they reshape the online environment into an economy of appearances.
​
Epistemically, this domination by bots generates a profound distortion. When the majority of signals and flows of information are artificially produced, the human subject is left navigating a landscape where “truth” and “relevance” are algorithmically simulated rather than organically expressed. The epistemic noise overwhelms critical capacities: likes, clicks, and shares no longer reliably indicate human intention but are automated echoes. Visibility becomes divorced from authenticity, and what appears to be consensus or popularity may simply be the byproduct of bot-driven replication. Trust in digital traces becomes fragile, forcing individuals to question whether anything they encounter online reflects genuine recognition or simply the mechanical labor of automated systems.
​
Epistemologically, the implications are even deeper. The web was once imagined as a collective intelligence system, a grand archive of human knowledge. If bots now dominate this archive, then knowledge itself is being co-produced—or even simulated—by non-human agents operating with hidden incentives. The categories of authorship, authenticity, and authority become blurred, and verification requires new forms of literacy. In such a context, digital knowledge risks becoming performative rather than representative: it is staged by algorithms and bots that optimize for visibility instead of accuracy. This inversion challenges the very ontology of knowledge in the digital age, raising the question of whether what we encounter online can still be called knowledge at all, or whether it has collapsed into a spectacle of automated signals.
​
​
Liviu Poenaru, Aug. 23, 2025

ARE SOCIETIES AND DEMOCRACIES APHASIC?
Watanabe, T., Inoue, K., Kuniyoshi, Y., Nakajima, K., & Aihara, K. (2025) show that large language models mirror the internal network dynamics of receptive aphasia, with highly polarized patterns of state transitions and dwelling times.
So here’s the punchline of this oddly poetic science experiment: chatbots and human brains with receptive aphasia stumble in surprisingly similar ways. That matters for mental health because it suggests our shiny machines might be replaying certain patterns of dysfunction that we usually only associate with damaged neurons. If AI can mimic the unstable state transitions of an aphasic brain, then clinicians may gain a new diagnostic mirror: machine behavior becomes a proxy for understanding breakdowns in human cognition. The flipside, though, is unsettling—patients may be treated as “glitchy algorithms,” while machines gain a veneer of human pathology. The border between neurology and computation starts to blur, and psychiatry risks being colonized by metaphors drawn from LLMs rather than lived human suffering.
For societies, the comparison is a warning shot. If the tools we use daily—search engines, chatbots, recommendation models—operate in ways akin to a brain disorder, then we are embedding a cognitive vulnerability into public life. Fluency without reliability fuels misinformation, amplifies stress, and chips away at trust in communication. In a culture already over-saturated with noise, deploying systems that mirror pathological processing only deepens collective confusion. This parallel with aphasia underscores how much of our shared reality is being filtered through distorted linguistic engines. Societies built on dialogue risk being reshaped by tools that speak like us but think like damaged brains.
Democracies, in particular, are fragile ecosystems of speech, listening, and deliberation. If public discourse increasingly depends on LLMs that exhibit “aphasic” dynamics—smooth rhetoric masking incoherence—citizens may unknowingly adapt to pathological patterns of sense-making. The danger is not just misinformation but a deeper erosion of cognitive norms: when incoherence becomes normalized, political manipulation thrives, critical debate erodes, and institutions wobble. What began as a quirky overlap between damaged human cognition and clever software turns out to be a structural risk: democratic life might be undermined by machines that, like receptive aphasia, flood the commons with language that sounds meaningful but fails to ground shared understanding.
​
Liviu Poenaru, Aug. 22, 2025

MENTAL HEALTH, ECONOMIC BELIEFS, AND THE NORMALIZATION OF SUFFERING IN 2025 RESEARCH
Liviu Poenaru, Aug. 1, 2025
​
Recent peer-reviewed research published in 2025 at the intersection of mental health and economic beliefs reveals a worrying epistemic limitation: the reduction of suffering to individual malfunction, severed from its social and ideological context. While studies like those of Li & Zeng (2025) and Carter et al. (2025) demonstrate correlations between financial strain and mental health outcomes, they do so through behaviorist and econometric frameworks that treat economic variables as neutral, technical facts. There is no questioning of the symbolic or ideological weight carried by concepts such as “income,” “productivity,” or “value.” These categories are not just descriptive—they are normative constructs that participate in the psychic structuring of individuals within a capitalist world-system.
​
In these studies, “economic beliefs” are treated as discrete, declarative opinions—about market fairness, job security, or financial self-efficacy. Such beliefs are assessed using survey-based tools and psychological scales, with no reference to their historical or structural production. This methodological individualism ignores the way beliefs about the economy are embedded in larger apparatuses: education, media, workplace ideology, and state discourse. As Bourdieu (1998) argued, neoliberal doxa operates precisely by rendering its own principles invisible and natural. Thus, to measure beliefs without interrogating their social genesis is to participate in the very reproduction of economic ideology.
​
Another recurring theme in this literature is the valorization of mental health as a state of emotional regulation and resilience. While this may appear harmless, it conceals a deeper ideological function: to promote compliance with a socio-economic order that itself produces psychic distress. Studies rarely examine whether the economic system imposes contradictory demands—autonomy and obedience, flexibility and stability, success and solidarity—that generate internal conflict. As Alain Ehrenberg (1998) showed in The Weariness of the Self, the neoliberal subject is not merely autonomous; he is exhausted by the imperative to be so. This exhaustion is not a side-effect—it is structural.
​
What is glaringly absent from the 2025 scientific landscape [from my vantage point, built on systematic scraping of the scientific literature on economic codes and their impacts on mental health] is a framework for understanding how economic norms become internalized as unconscious mandates. Studies speak of stress, anxiety, or self-doubt, but do not explore how these affective states are shaped by deeply rooted economic expectations: to be useful, to be seen, to succeed. Without a theory of economic subjectivation, and without attention to the symbolic violence of economic rationality (as theorized by Boltanski & Chiapello, 2005), mental health research risks reducing distress to maladaptation. The cost is immense: a whitewashing of psychic life, in which suffering is not denied but neutralized—stripped of meaning, critique, and potential for transformation.
​
While recent research provides useful data, it fails to question the systemic and ideological matrix from which psychic suffering emerges. By reducing beliefs to opinions and distress to dysfunction, this literature becomes complicit with the very forces it might otherwise diagnose. What is needed is a research agenda that interrogates economic beliefs not as private convictions but as collective constructions, embedded in power, shaped by ideology, and sustained through institutional routines. Only then can mental health research move from pathologizing symptoms to critiquing the systems that produce them.
​
REFERENCES
Bourdieu, P. (1998). Contre-feux: Propos pour servir à la résistance contre l'invasion néolibérale. Raisons d’agir.
Boltanski, L., & Chiapello, È. (2005). The new spirit of capitalism (G. Elliott, Trans.). Verso.
Carter, A., Nguyen, V., & Sethi, R. (2025). Mental health and economic decision-making: A systematic review of neurobehavioral correlates. Preprints.org.
Ehrenberg, A. (1998). La fatigue d’être soi: Dépression et société. Odile Jacob.
Foucault, M. (2004). Naissance de la biopolitique: Cours au Collège de France (1978–1979). Gallimard/Seuil.
Li, H., & Zeng, Q. (2025). Consumption, relative deprivation and mental health: Evidence from hedonic consumption. Frontiers in Public Health, 13, Article 1511547. https://doi.org/10.3389/fpubh.2025.1511547
​
​

THE EPISTEMIC AUTODAFÉ
Liviu Poenaru, July 11, 2025
​
IT’S OBVIOUS that a scientific article now lasts about as long as a TikTok post. Months or years of research, peer review, and theoretical refinement are granted the same attention span as a 15-second video. In the digital environment where information is consumed in rapid, endless scrolls, even the most rigorous knowledge collapses under the weight of distraction. We are not witnessing the democratization of knowledge, but its trivialization. Visibility no longer depends on depth or substance—it depends on algorithms, immediacy, and emotional traction. The economy of virality has overtaken the economy of understanding.
IT’S OBVIOUS that this is not just a crisis of visibility—it is an epistemic autodafé. Like the book burnings of the 20th century, today’s mass disappearance of thought happens in silence, through speed. Nothing is physically destroyed, but everything is buried under noise. What is not performative, short, and marketable is algorithmically sidelined. Scientific knowledge, especially when critical, interdisciplinary, or complex, is drowned out by platform logic. We are burning texts without flames—making entire epistemologies obsolete not by force, but by acceleration.
IT’S OBVIOUS that if we do not confront this process, knowledge itself will become incompatible with the world we live in. The act of thinking in depth, of engaging with complexity, will be reduced to an eccentricity—or a form of resistance. In this architecture of forgetting, science must reclaim its visibility not by simplifying itself, but by defending the conditions of time, slowness, and memory. Otherwise, we risk a future in which the most urgent discoveries vanish before they are even understood, and where truth, like everything else, is just content waiting to be scrolled past.
​

THE BLACKOUT CHALLENGE AS A PARADIGMATIC METAPHOR OF SOCIAL MEDIA
Liviu Poenaru, July 5, 2025
​
​
The "blackout challenge" emerged as a dangerous online trend, primarily propagated through social media platforms, wherein individuals intentionally induce a state of asphyxiation—often by strangulation using household items like belts or scarves—with the perilous objective of experiencing a fleeting sensation of euphoria or altered consciousness. Participants frequently film these hazardous acts, disseminating them across various digital channels to achieve visibility and viral reach. This practice, disturbingly, toys with the thin line between life and death within an algorithmic theatre where visibility and virality are elevated to supreme values. The core danger is profound: multiple instances of fatalities have been directly linked to this trend.
​
In 2021, a 10-year-old girl named Nylah Anderson from Pennsylvania tragically died after attempting the challenge. In 2022, Archie Battersbee, a 12-year-old boy in the UK, died after his parents found him unconscious with a ligature around his neck. More recently, in July 2025, a 12-year-old boy named Sébastien in Castleford, UK, also lost his life after participating in this challenge. His family and loved ones have since issued stark warnings about the pervasive dangers of such viral phenomena. These are just a few of the numerous documented cases across the globe, underscoring the lethal reality behind this online phenomenon.
​
This perilous quest for filmed self-suffocation can be understood as a metaphorical expression of an asphyxiating relationship with social media itself. Digital platforms, through their inherent design and pervasive influence, often stifle individual expression and genuine individuation by imposing homogenizing behavioral and aesthetic norms. Attention, the vital currency of the digital realm, is captured, compressed, and intensely exploited until it becomes a scarce commodity—a metaphorical "air" that is bought, sold, and desperately gasped for. Consequently, the pursuit of social recognition within the digital space assumes a pathological vitality: to be seen or to cease to exist becomes the harrowing existential dilemma.
​
The "blackout challenge" further illustrates how users paradoxically believe they are exerting a form of self-control—choosing to suspend their lives, to film themselves, to become objects of attention—when in reality, they are enmeshed within a mimetic control structure. Here, the decision to engage in self-endangerment is merely the culmination of a deeply ingrained process of algorithmic alienation. This act is not a rebellion against the system; rather, it is its logical conclusion in a world where attention capitalism compels individuals to sacrifice their physical bodies for the ephemeral shadow of digital validation.
​
Within the "blackout challenge," the intrinsic meaning of the act is conspicuously absent. It functions neither as a rite of passage, nor a form of protest, nor a genuine personal challenge. Instead, it is a pure production of signal: a shocking image, a brief video clip, a fleeting spike in visibility. The metaphor becomes chillingly evident: nothing needs to signify; everything merely needs to circulate, capture attention, and generate "reach." The body itself transforms into a sacrificial interface within the relentless economy of clicks.
​
As a paradigmatic metaphor, the "blackout challenge" encapsulates several fundamental traits of contemporary social media. It highlights a self-destructive pleasure derived from recognition, often intertwined with a profound anxiety of non-existence when one is not seen, liked, or commented upon. It signifies an extinction of the connection to reality, replaced by the intense, often overwhelming, affects generated within digital spaces. There is also a disturbing inversion of ritual: historically, rites initiated individuals into symbolic life; here, they initiate them into symbolic death, sometimes tragically real.
​
The "blackout challenge" represents an extreme yet profoundly revealing symptom of a desire for disappearance that pervades connected societies. This is not merely an individual aberration but the tragic expression of a systemic issue wherein attention becomes more valuable than life, recognition more vital than breath, and virality more urgent than intrinsic meaning. In this sense, the "blackout challenge" is not an anomaly; it is a dark mirror reflecting our era, a scene of symbolic suffocation where the individual does not perish despite social media, but rather through them, within them, and for them.
​
​
DID YOU KNOW THAT DISRUPTION IS THE RULE?
Liviu Poenaru, July 3, 2025
​
DID YOU KNOW THAT the largest psychological experiment in political history was conducted on over 87 million Facebook users—without their consent? Cambridge Analytica, born from military psychological operations (PSYOPs) and backed by Steve Bannon, didn’t just scrape personal data—it militarized it. By hijacking emotional vulnerabilities and exploiting affective triggers like anger, fear, and resentment, the firm engineered digital psychowarfare. This new frontier of manipulation blended psychographic profiling, viral messaging, and AI-enhanced targeting to sway elections, fracture societies, and activate deep-seated prejudices. What seemed like simple online ads were in fact emotionally weaponized scripts tested on specific populations—turning the digital space into a battlefield of psychological conditioning.
DID YOU KNOW THAT the guiding principle behind this strategy wasn’t truth, but disruption? Christopher Wylie recounts how chaos was the goal—not an unfortunate side effect. “Nonsense is more effective than truth” was the underlying creed. Instead of building consensus, the operation deliberately stirred rage, paranoia, and confusion, using fake news, disturbing imagery, and microtargeted propaganda to fragment reality itself. This was not just a tech scandal; it was a global experiment in perspecticide: the systematic breakdown of symbolic thought, shared meaning, and narrative continuity. The algorithm didn’t just track you—it sculpted you. And it did so through your eyes, emotions, and unconscious biases.
DID YOU KNOW THAT your anger could make you less rational and more impulsive—and that this was precisely the effect Cambridge Analytica sought? By triggering emotional overdrive, the firm made voters more susceptible to propaganda, more likely to punish perceived outsiders, and more willing to accept economic self-harm for ideological gratification. Emotional contagion spread like a digital virus across WhatsApp, Facebook, and Twitter, facilitated by Big Data, military contracts, and political ambition. What Wylie ultimately reveals is a blueprint of 21st-century emotional capitalism—one that transforms your pain, fear, and frustration into a lucrative engine of engagement, polarization, and control. This isn’t just manipulation. It’s emotional colonization.
​
​

HYPERMOBILITY AND THE PARADOX OF DIGITAL FREEDOM:
How Informational Movement Fuels Ideological Segregation
Liviu Poenaru, June 30, 2025
​
DID YOU KNOW THAT the more we move in digital environments—clicking, swiping, jumping from one group to another—the more we may unknowingly contribute to ideological segregation and social polarization? Unlike the physical world, where mobility was historically limited and coexistence with diverse perspectives was structurally inevitable, digital spaces allow us to escape disagreement in milliseconds. With a single gesture, we leave discomfort and land in echo chambers filled with validation, sameness, and familiar narratives. This hypermobility feels empowering—but it comes at the cost of cognitive plurality and collective dialogue.​
Computational models like Schelling’s segregation simulations illustrate this dynamic: even when individuals exhibit a high tolerance for diversity, if they are allowed to move freely and widely, they tend to cluster into homogeneous communities. Transposed to digital society, this suggests that the greater our freedom of informational mobility, the more likely we are to seek out the ideologically comfortable. Algorithmic infrastructures further entrench this tendency by amplifying content we already agree with, leading to what researchers describe as “networked homophily”—a feedback loop of identity, preference, and confirmation bias.​
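The mechanism is easy to reproduce. Below is a minimal Schelling-style simulation in Python (the grid size, tolerance threshold, and relocation rule are illustrative choices, not the parameters of any particular study): agents relocate only when their local share of like-typed neighbors falls below a tolerance threshold, yet macro-level segregation emerges anyway.

```python
import random

def schelling(n=20, empty_frac=0.1, tolerance=0.4, steps=5000, seed=0):
    """Toy Schelling model on an n-by-n torus with two agent types.

    An agent is unhappy when fewer than `tolerance` of its occupied
    neighbors share its type; unhappy agents jump to a random empty cell.
    Returns the average share of like-typed neighbors after `steps` moves.
    """
    rng = random.Random(seed)
    grid = [[rng.choice([0, 1]) if rng.random() > empty_frac else None
             for _ in range(n)] for _ in range(n)]

    def occupied_neighbors(i, j):
        cells = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (di, dj) != (0, 0):
                    v = grid[(i + di) % n][(j + dj) % n]
                    if v is not None:
                        cells.append(v)
        return cells

    def like_share(i, j):
        nbrs = occupied_neighbors(i, j)
        return sum(v == grid[i][j] for v in nbrs) / len(nbrs) if nbrs else 1.0

    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] is not None and like_share(i, j) < tolerance:
            empties = [(a, b) for a in range(n) for b in range(n)
                       if grid[a][b] is None]
            if not empties:
                break
            a, b = rng.choice(empties)
            grid[a][b], grid[i][j] = grid[i][j], None

    shares = [like_share(i, j) for i in range(n) for j in range(n)
              if grid[i][j] is not None]
    return sum(shares) / len(shares)

# Agents here tolerate being a 60% local minority (tolerance=0.4), yet
# free movement still pushes the like-neighbor share well above the
# roughly 50% baseline of the initial random mix.
print(round(schelling(), 2))
```

Even agents willing to live as a local minority end up in homogeneous clusters once relocation is effectively free, which is precisely the condition digital platforms create with a swipe.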
In this sense, more digital movement doesn’t mean more exposure—it means more self-selected filtering. The result is a paradox: our global information networks were supposed to increase access to plurality, but they often reduce us to narrow islands of shared belief, disconnected from the wider social fabric. Polarization is not a failure of digitality—it’s a logical consequence of its architecture, unless deliberate friction, dialogue, and diversity are reintroduced as core values of digital design.
​

DID YOU KNOW THAT BAD IS BETTER?
Liviu Poenaru, June 28, 2025
​
Our brains are wired to prioritize negative information. This phenomenon, known as negativity bias, means we're more likely to focus on, and remember, bad news over good or neutral information. This isn't just a quirk; it's a deep-seated evolutionary trait that once helped our ancestors survive by keeping them alert to dangers.
At the synaptic level, learning creates prolonged changes in the strength of synaptic connections, a phenomenon known as synaptic plasticity. When it comes to negative stimuli, our brains become sensitized, strengthening the synapses associated with dangerous or threatening information. This process is so powerful that it could explain why political and economic powers often use fear: to maintain sensitization to certain stimuli, preventing habituation and cognitive disinterest from setting in.
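As a rough illustration of that asymmetry, the toy model below strengthens a response weight with each threat-tagged exposure and lets it decay with each neutral one, mimicking sensitization versus habituation. The update rule and rates are assumptions for illustration, not measured synaptic parameters.

```python
# Toy sensitization vs. habituation model. The multiplicative update and
# the rates (+5% per threat exposure, -3% per neutral exposure) are
# illustrative assumptions, not measured synaptic values.
def exposure_series(negative, trials=50, w=1.0, up=0.05, down=0.03):
    history = []
    for _ in range(trials):
        w *= (1 + up) if negative else (1 - down)  # strengthen or decay
        history.append(w)
    return history

neutral = exposure_series(negative=False)
threat = exposure_series(negative=True)
print(f"after 50 exposures: neutral {neutral[-1]:.2f}, threat {threat[-1]:.2f}")
# Neutral responses fade toward indifference; threat responses grow,
# resisting habituation, which is exactly what makes fear so reusable.
```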
Recent research from Soroka, Fournier, and Nir (2019) provides robust evidence of this bias across different cultures and countries, showing it's a universal phenomenon. People consistently exhibit stronger psychophysiological reactions to negative news, regardless of their cultural background. This pervasive bias has profound implications for media consumption, as it directly influences what news gets produced and highlighted.
The result? Media outlets, driven by the need to capture attention, are incentivized to focus disproportionately on negative events. This creates a skewed reality, potentially fostering a more pessimistic worldview and contributing to increased stress, anxiety, and depression. It is a lucrative model for digital platforms, which algorithmically capture our attention by emphasizing negativity and perpetuating a culture of suffering. This constant bombardment of increasingly polarized and catastrophic information, negative emotions, and hostile comments captures attention by any means necessary.
​
​