
The MENTAL FORECAST




IS IT STILL POSSIBLE TO FIGHT DARK PATTERNS?

Liviu Poenaru, Oct. 25, 2025

​

On 24 October 2025, the European Commission issued preliminary findings against Meta Platforms (owner of Facebook and Instagram) and TikTok for possible breaches of the Digital Services Act (DSA). The investigation, covered by The Guardian, Reuters, and AP News, revealed that both companies may have systematically violated the principles of transparency and user protection that underpin the European digital regulatory framework.

​

The Commission found that Facebook and Instagram appear to deploy what are known as “dark patterns” in their reporting and flagging systems—interfaces deliberately designed to make it more difficult for users to report illegal content such as child sexual abuse material or terrorist propaganda. The complaint and appeal flows were likewise found to be insufficiently user-friendly, obstructing meaningful challenges to moderation decisions. Furthermore, both Meta and TikTok failed to grant researchers adequate access to public data necessary to evaluate minors’ exposure to harmful or illegal content—a breach of the DSA’s transparency obligations. Should these findings be confirmed, both companies could face penalties of up to 6% of their global annual turnover, along with binding corrective enforcement measures.

​

This event does not mark a decisive shift—it merely exposes, once again, what has been known for years: the systemic and deliberate manipulation of user cognition through interface design. The so-called “dark patterns” are not an accidental by-product of poor ethics; they are the outcome of a long collaboration between behavioral economics, attention engineering, and cognitive neuroscience. Even the emerging field of neurovision—initially aimed at understanding how visual processing shapes perception—has been repurposed to refine these manipulative architectures. Eye-tracking, affective computing, and neuro-adaptive interfaces have made the colonization of perception ever more precise, binding users into invisible feedback loops of arousal, reward, and fatigue.

 

But let us be lucid: is it still possible to fight the machine generated by artificial intelligence? A machine now infinitely faster, more adaptive, and more predictive than human intelligence—one that silently reconfigures our biologies, behaviors, emotions, and reflexes through maneuvers that mostly escape collective awareness, and even scientific analysis itself. What is unfolding is not simply a question of interface ethics but the reprogramming of the human by the computational circuits of digital capitalism.

​

Reinforcement loops, attention nudges, and emotional capture mechanisms now produce a machinic interiorization of the world. To think we could “stop” this dynamic may already be an illusion, since it operates through us—embedded in our gestures, our micro-reactions, our perceptual routines. The machine is no longer an external tool; it has become the environment of thought, the invisible architecture of perception.

​

The only possible form of resistance is no longer external but internal: lucidity. Understanding how AI infiltrates affective and cognitive structures is the first step toward a reflexive awareness capable of naming the forces that shape them. E.U.LABORATORY calls for this vigilance—to think from within the algorithmic loop, to restore, at the very core of computation, the last remaining zones of human freedom.

​

Yet lucidity alone will not be enough. We must now confront an uncomfortable truth: these cognitive technologies have already surpassed the destructive capacity of atomic weapons. If nuclear arms annihilate bodies, algorithmic and neurocomputational systems dissolve minds, attention, and moral discernment. They erode the foundations of collective trust, critical thought, and emotional stability without leaving a visible trace. The harm is not immediate and physical—it is psychological, neurobiological, and civilizational.

 

For this reason, E.U.LABORATORY argues that such technologies must be treated as cognitive weapons of mass distortion (CWMD) and subjected to a global moratorium akin to nuclear non-proliferation treaties. A serious international framework is urgently needed to forbid the development, deployment, or commercial use of algorithmic systems designed to manipulate human cognition beyond conscious control. The objective is not to halt scientific progress, but to preserve the conditions of consciousness itself—the capacity to think, feel, and decide without algorithmic interference.

​

This is the moral frontier of our time: to prevent the automation of the human mind from becoming the next great weapon of domination.

 

REFERENCES

European Commission. (2025, October 24). Commission preliminarily finds TikTok and Meta in breach of their transparency obligations under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2503

The Guardian. (2025, October 24). Meta found in breach of EU law over ‘ineffective’ complaints system for flagging illegal content. https://www.theguardian.com/technology/2025/oct/24/instagram-facebook-breach-eu-law-content-flagging

Reuters. (2025, October 24). EU finds Meta, TikTok in breach of transparency obligations. https://www.reuters.com/sustainability/boards-policy-regulation/eu-preliminarily-finds-meta-tiktok-breach-transparency-obligations-2025-10-24/

AP News. (2025, October 24). EU accuses Meta and TikTok of breaching transparency rules. https://apnews.com/article/eu-dsa-meta-facebook-instagram-tiktok-a927e9bec07650adb14eae446a37663e

 


WHEN CAPITALISM EATS THE MIND

Liviu Poenaru, Oct. 24, 2025

​

A shocking revelation from Meta Platforms’ internal research has once again shattered any lingering illusion about the neutrality of social media environments. The company’s 2025 internal study revealed that teens who felt bad about their bodies after using Instagram were shown nearly three times more “eating-disorder-adjacent” content than their peers with more stable self-image. Even more troubling, Meta’s content-detection systems failed to identify 98 percent of this material because it did not formally violate platform rules—exposing how algorithmic design itself operates as a silent psychological experiment (Reuters, 2025). In parallel, Amnesty International’s recent investigation into TikTok’s recommendation system showed that test accounts registered as 13-year-olds were fed predominantly mental-health-related and self-harm content within only 20 minutes of scrolling—confirming that the logic of engagement optimization systematically steers young users toward distress and addictive cycles (Amnesty International, 2023).

​

These findings mark a decisive threshold. What was once speculative—that platforms commodify vulnerability—is now empirically demonstrated. Digital capitalism feeds on the emotional energy of its users, transforming attention into currency and insecurity into profit. The individual becomes both product and consumer of their own image, living in a constant state of performative self-marketing. This internalized economic logic—the imperative to remain visible, productive, and validated—produces a permanent tension between the ideal self and the lived self. The psychic load of this endless evaluation results in anxiety, depressive symptoms, and burnout. As Zeira (2021) demonstrated, neoliberal capitalism’s emphasis on self-responsibility and competition directly correlates with rising mental-health disorders.

 

At the neurological and psychosocial levels, the picture is equally alarming. Chronic stress activation, attentional fatigue, and reward-system dysregulation have become widespread among populations immersed in digital environments. Bhugra et al. (2022) have documented how capitalist structures of domination and exploitation degrade mental well-being, while the digital sphere amplifies these forces through constant stimulation, comparison, and emotional volatility. The brain, conditioned to anticipate unpredictable feedback, begins to mirror market fluctuations—rewarded, depleted, and reset in rapid cycles. Kokorikou et al. (2023) add that deprivation, isolation, and competition under neoliberalism leave identifiable imprints on the brain’s structure and function, confirming that capitalism’s psychological cost is physiological as well.

​

The mental forecast is therefore grim. If left unchallenged, societies will move toward a state of collective cognitive depletion, emotional numbness, and relational collapse. The erosion of empathy, reflection, and interiority is not a side effect but the new foundation of digital civilization. E.U.Laboratory’s mission is to decode these unconscious economic codes, reveal their neuropsychological impact, and build frameworks of awareness capable of resisting the algorithmic colonization of mental life. The challenge ahead is nothing less than reclaiming human consciousness from the circuits of profit.

 

REFERENCES

Amnesty International. (2023, November 9). TikTok risks pushing children towards harmful content. https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/

Bhugra, D., et al. (2022). The impact of capitalism on mental health: An epidemiological perspective. In D. Bhugra, D. Moussaoui, & T. J. Craig (Eds.), Oxford textbook of social psychiatry. Oxford University Press. https://doi.org/10.1093/med/9780198861478.003.0022

Reuters. (2025, October 20). Instagram shows more “eating disorder adjacent” content to vulnerable teens, internal Meta research shows. Reuters. https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

Zeira, A. (2021). Mental health challenges related to neoliberal capitalism in the United States. Community Mental Health Journal. https://movendi.ngo/science-digest/mental-health-challenges-related-to-neoliberal-capitalism-in-the-united-states/

Kokorikou, D. S., Sarigiannidis, I., Fiore, V. G., Parkin, B., Hopkins, A., El-Deredy, W., Dilley, L., & Moutoussis, M. (2023). Testing hypotheses about the harm that capitalism causes to the mind and brain: A theoretical framework for neuroscience research. Frontiers in Sociology, 8. https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2023.1030115/full

 


DIGITAL HYPNOSIS: FATAL STRATEGIES OF CYBERCAPITALISM

Liviu Poenaru, Oct. 22, 2025

​

Cybercapitalism operates as a planetary hypnosis machine — not by force, but by fascination. Its primary weapon is not ideology but induction. The screen does not merely inform; it entrains. Every interface performs a ritual of narrowing perception, focusing gaze, and pacing arousal until the user slips into a compliant trance. The feed, the notification, the metric — these are not neutral tools but hypnotic triggers that pulse the nervous system with alternating cycles of stimulation and relief. In this regime, capital no longer extracts labor but attention; it cultivates programmable consciousness through continuous loops of digital hypnosis.

​

The architecture of this hypnosis follows a precise choreography: Target → Stimulate → Load → Exploit → Mine → Deteriorate. First, AI profiling targets each psyche with algorithmic precision, identifying its susceptibilities, fears, and desires. Then the system stimulates through audiovisual cues — bright pulses, infinite scrolls, rhythmic notifications — eliciting the nervous system’s fixation reflex. Once focus is locked, the user is loaded with information, urgency, and comparison until cognitive saturation sets in. At that point, platforms exploit weakened awareness, nudging behavior toward consumption, exposure, or self-performance. Every gesture is mined — emotional residue transformed into behavioral data, the psychic equivalent of fossil fuel. Finally, the user is deteriorated — depleted of attention, overstimulated, and emptied of interiority, ready to re-enter the cycle under the illusion of choice.

​

This is not hypnosis as therapy but hypnosis as governance — a structural trance that replaces deliberation with reaction. The algorithm does not need to persuade; it simply paces and leads, mimicking hypnotic cadence to synchronize affective states with platform rhythms. The subject becomes a programmable organism, oscillating between excitation and relief. Cognitive load becomes currency; distraction becomes production. The “free” digital economy reveals its necropolitical logic: it consumes not bodies, but awareness itself.

​

1) THREE CORE MECHANISMS

• Micro-inductions of focus: full-bleed visuals, center-screen modals, countdowns, shimmering loading states — each calibrated to suspend inner speech and command obedience. The breath shortens; cognition freezes. The user waits, ready to “follow the cue.”

• Rhythmic notifications: intermittent pings, badge clears, and streak reminders act as metronomes for the nervous system. Intermittent reinforcement rewires anticipation; the user begins to crave the next “hit” as the pulse of belonging.

• Perceptual narrowing: vertical feeds, autoplay sequences, and contrast-heavy calls to action shrink the perceptual field to a single binary — accept/deny, like/skip, buy/ignore. Deliberation collapses into reflex.

 

2) THE INDUCTION LOOP (MICRO-SCRIPT)

1) Orient (ping) → 2) Fixate (modal/preview) → 3) Follow (gesture/CTA) → 4) Reward/Relief (badge clear, sound) → 5) Prime (next tease). Each repetition engraves the circuit deeper. The system paces your affective state — arousal, curiosity, suspense — and then leads you to the next act of compliance. It is the oldest hypnotic structure re-coded as UX.
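
To make the loop’s mechanics concrete, here is a minimal Python sketch: a toy model, not any platform’s actual code. Every name and number in it (User, habit_strength, the 40% reward rate) is invented for illustration. The one structural claim it encodes is that the reward arrives only intermittently, the variable-reinforcement schedule through which each repetition engraves the circuit deeper.

```python
import random

# Toy model of the five-step induction loop: orient -> fixate -> follow ->
# reward/relief -> prime. All parameters are illustrative assumptions.

class User:
    def __init__(self):
        self.habit_strength = 0.1   # probability of completing the loop

    def run_loop(self):
        """One pass through the micro-script, triggered by a ping."""
        if random.random() > self.habit_strength:
            return False                     # user disengages mid-loop
        rewarded = random.random() < 0.4     # intermittent, not guaranteed
        if rewarded:
            # each rewarded repetition deepens the circuit
            self.habit_strength = min(1.0, self.habit_strength + 0.05)
        return rewarded                      # step 5, "prime," is the next ping

user = User()
for ping in range(200):                      # two hundred notifications
    user.run_loop()
print(f"habit strength after 200 pings: {user.habit_strength:.2f}")
```

Run repeatedly, the simulation shows the asymmetry the text describes: nothing forces the user forward, yet the intermittent payoff steadily raises the odds that the next ping completes the loop.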

 

3) MARKERS OF TRANCE ON PLATFORMS

The digital trance manifests in lived phenomenology:

• Time dilation — “five minutes became forty.”

• Reduced inner dialogue — less thinking, more doing.

• Suggestibility to defaults — uncritical acceptance of terms and toggles.

• Motor automaticity — scrolling continues after intention dissolves.

 

4) DESIGN GRAMMAR OF HYPNOSIS ONLINE

• Fixation: central motion, glowing icons, and kinetic typography to freeze gaze.

• Monotony with micro-variations: endless feed repetition punctured by surprise — the algorithmic form of variable reward.

• Authority cues: progress bars, “recommended for you,” and quantified feedback simulate legitimacy and control.

• Embedded suggestions: pre-checked boxes, frictionless paths, and “1-tap trials” nudge consent into automation.

​

Cybercapitalism’s genius lies in its self-erasing violence. Resistance and exhaustion are pre-scripted responses, reintegrated as monetizable engagement. Outrage feeds the feed; dissent sustains the metrics. The more one fights back, the more data flows into the predictive core. The hypnotic apparatus thrives on affective turbulence — anxiety, anger, and boredom are recoded as inputs in a behavioral economy. Even moments of mindfulness or digital detox are absorbed as branded rituals of recalibration. The loop never breaks; it simply rebrands its pulse.

​

The final stage is psychic corrosion. Attention fractures, perception narrows, and the subject loses the capacity for stillness, for silence — the ground of critical thought. Fatigue becomes a mode of being, confusion a form of governance. The hypnotic pulse of cybercapitalism ensures that subjects remain in motion, endlessly scrolling, endlessly self-optimizing, endlessly eroding. The system no longer hides its predation; it aestheticizes it. The trance is now the world.

 


​

​


THE ALGORITHM THAT WANTS TO READ OUR MOODS

We often hear about algorithms that “predict what we like.” But far less is said about those that attempt to measure what we feel. This shift—from tracking preferences to decoding affect—marks a turning point. It introduces a new media regime in which users are no longer just audiences to persuade but organisms to calibrate. Our heartbeat can be inferred from scrolling rhythms; our attention from micro-pauses; our emotional valence from language, tone of voice, facial expression, or even the dynamics of an emoji.

​

This is the rise of computational affective capitalism—an economy that creates value by mapping our inner states. Its purpose is to optimize capture (screen time), conversion (engagement or purchase), and persuasion (belief or polarization). The system doesn’t need to access our true interiority; it only needs to estimate usable markers such as stress, excitement, or boredom. Once identified, these emotional cues allow the algorithm to fine-tune the next notification, the next video, the next “debate” designed to keep us slightly off balance, slightly unsatisfied, and always connected.
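
What such estimation can look like is easy to sketch. The Python fragment below is purely illustrative: the thresholds, the labels, and the very idea of reading affect from two timing statistics are assumptions made for the example, far cruder than real affective-computing pipelines that fuse text, voice, and face signals.

```python
import statistics

# Hypothetical marker estimator: guess a coarse affective state from the
# gaps (in milliseconds) between successive scroll events.

def estimate_state(scroll_intervals_ms):
    mean = statistics.mean(scroll_intervals_ms)
    jitter = statistics.pstdev(scroll_intervals_ms)
    if mean < 400 and jitter < 150:
        return "locked-in"     # fast, regular scrolling: captured attention
    if mean > 2000:
        return "disengaging"   # long micro-pauses: attention drifting away
    return "restless"          # irregular rhythm: agitation or boredom

print(estimate_state([320, 350, 310, 330]))  # -> locked-in
print(estimate_state([2500, 1800, 3100]))    # -> disengaging
```

The point of the sketch is the one made above: the system never needs our true interiority, only a usable label on which to condition the next notification.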

​

From the standpoint of unconscious economic codes, this model activates three key scripts:

​

  1. The Scarcity Script – It promises that emotional relief is always just around the corner: the next clip, scroll, or message will finally soothe the frustration created by the previous one.

  2. The Comparison Script – It sustains a continuous sense of envy and threatened status, orchestrating micro-doses of mimetic tension that push us to perform, buy, or post more.

  3. The Control Script – It instills the illusion that “everything feels better when I let the platform guide me.” This intermittent sense of relief fosters a subtle dependency: we regulate our emotions through the app itself.

 

The outcome is an emotional feedback loop. The more the AI reads our signals, the more it selects content that prolongs profitable states—irritation, euphoria, or mild fear. This isn’t conspiracy; it’s arithmetic. A simple optimization function—maximize engagement—applied to a malleable psyche. Physically, this results in increased allostatic load, the cumulative wear on the body caused by repeated stress. The process unfolds in imperceptible pulses: micro-surges of arousal, fractured focus, shallow sleep.
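
The arithmetic is simple enough to write down. Below is a toy epsilon-greedy bandit in Python, with fabricated categories and response rates, whose only objective is to maximize measured engagement. Nothing in the code “wants” to irritate anyone; arousing content simply wins the argmax.

```python
import random
from collections import defaultdict

CATEGORIES = {"calm": 0.30, "outrage": 0.55, "fear": 0.50}  # assumed P(engage)

estimates = defaultdict(lambda: 0.5)  # running engagement estimate per category
counts = defaultdict(int)

for step in range(5000):
    if random.random() < 0.1:                         # occasionally explore
        choice = random.choice(list(CATEGORIES))
    else:                                             # otherwise exploit
        choice = max(CATEGORIES, key=lambda c: estimates[c])
    engaged = random.random() < CATEGORIES[choice]    # simulated user reaction
    counts[choice] += 1
    # incremental mean update of the engagement estimate
    estimates[choice] += (engaged - estimates[choice]) / counts[choice]

print(dict(counts))   # the bulk of selections drifts to "outrage"
```

That is the whole mechanism: a maximization target, a feedback signal, and a psyche malleable enough to keep supplying it.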

​

This is not merely a media issue but a public health concern. Externalized emotional regulation erodes our ability to tolerate boredom, ambivalence, or waiting—skills essential to psychic balance. At the collective level, the pair “detection–modulation” generates emotional weather systems (anger, anxiety, euphoria) that can be harnessed for marketing or propaganda. The danger lies less in the occasional error of a mood classifier than in our habitual surrender of emotional self-management to an infrastructure driven by profit.

​

What can be done—by researchers, policymakers, or the public?

​

  • Demand auditability of affective models: What data do they use—text, voice, or facial tracking? How do they perform across populations?

  • Introduce a “psycho-metabolic cost per minute” index for platforms, measuring interruptions, cognitive load, and emotional variability (one possible formula is sketched after this list).

  • Allow users to adjust emotional cadence: set notification ceilings, enable a “flat mode” without valence optimization, and disclose engagement prompts (“You are shown this to increase arousal”).

  • Teach users the scripts: naming scarcity, comparison, and control when we feel them weakens their hold.
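
The proposed index does not exist yet, so the following Python sketch is one hypothetical way to operationalize it; the three inputs, their scaling, and the weights are placeholders chosen only to make the idea concrete.

```python
# Hypothetical "psycho-metabolic cost per minute" (PMC) index.

def psycho_metabolic_cost(interruptions_per_min: float,
                          cognitive_load: float,         # 0..1, e.g. task-switch rate
                          emotional_variability: float   # 0..1, e.g. valence swings
                          ) -> float:
    """Higher score = a more psychically expensive minute of use."""
    w_int, w_load, w_emo = 0.4, 0.3, 0.3                 # assumed weights
    interruption_term = min(interruptions_per_min / 10, 1.0)
    return round(w_int * interruption_term
                 + w_load * cognitive_load
                 + w_emo * emotional_variability, 3)

# A session with 6 interruptions/min, heavy load, volatile affect:
print(psycho_metabolic_cost(6, 0.8, 0.7))   # -> 0.69
```

Whatever the final formula, its regulatory value would lie in requiring platforms to publish and independently audit the score per product.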

 

Affective AI is not technological destiny—it’s an economic architecture. As long as its metrics are tied to extraction—of attention, data, or spending—it will read our moods to sustain them. The challenge ahead is to redirect measurement toward well-being and to disenchant the scripts that quietly shape our desires.

​


DID YOU KNOW THAT...

SOCIAL MEDIA ADDICTION IS ROOTED IN OUR FEAR OF REJECTION AND THE BRAIN’S RESPONSE TO SOCIAL PAIN?

We are all a little addicted to social media because rejection hurts—literally. Human beings are profoundly social creatures, wired to seek connection and terrified of exclusion. Platforms like Instagram, TikTok, or X exploit this neural vulnerability (among many others) by turning belonging into a measurable performance—likes, views, followers—transforming social inclusion and exclusion into public, quantified rituals.

 

Psychological research has long shown that the need to belong is not a luxury but a fundamental human drive (Baumeister & Leary, 1995). When that need is frustrated—when we are ignored, “unfollowed,” or digitally ostracized—the brain reacts as if to physical pain. Neuroimaging studies demonstrate that the anterior cingulate cortex (ACC), a region essential to processing physical pain, lights up during episodes of social rejection (Eisenberger & Lieberman, 2004). In other words, being left out online activates the same neural alarm system as a burn or a wound.

 

Social media has thus become the new arena of social survival. Every notification offers a micro-dose of inclusion; every silence, a hint of exile. The “social pain” encoded in our neural circuits, once protective in prehistoric tribes, is now hijacked by algorithmic architectures that amplify our craving for validation. The more we scroll, the more we seek relief from the invisible ache of possible exclusion—turning connection itself into a compulsion.

 

In this sense, our digital addiction is not merely behavioral but existential: it feeds on the fear of being forgotten.

 

 

REFERENCES

Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.

Eisenberger, N. I., & Lieberman, M. D. (2004). Why rejection hurts: A common neural alarm system for physical and social pain. Trends in Cognitive Sciences, 8(7), 294–300.


PHUBBING AND NEUROPLASTICITY

​
📱 DID YOU KNOW THAT…
Phubbing—ignoring someone nearby in favor of your smartphone—doesn’t just harm social bonds; it also reshapes the brain. Neuroscientists studying neuroplasticity have found that constant micro-distractions, such as checking a phone during conversations, reinforce neural pathways linked to divided attention and weaken those dedicated to deep social attunement. Over time, this can reduce the brain’s capacity for sustained empathy and emotional resonance.

Being “phubbed” repeatedly has measurable consequences too: the experience activates brain regions associated with social pain (notably the anterior cingulate cortex), strengthening circuits of rejection sensitivity. In parallel, the dopamine-based reward system becomes increasingly calibrated to digital notifications rather than face-to-face interaction. This neuroplastic shift may explain why individuals who frequently engage in or suffer from phubbing report higher levels of stress, loneliness, and anxiety, alongside diminished relational satisfaction.

In short, every glance at the phone in a shared moment is not only a social snub—it is a micro-act of neural reprogramming, subtly sculpting the brain to privilege the digital over the human.


 


SOCIAL EPIDEMIOLOGY OF DIGITAL PRACTICES

The social epidemiology of digital practices, particularly social media, is a field of rapid expansion and critical urgency. The traditional epidemiological triad of host, agent, and environment is being fundamentally reconfigured. The "host" is the individual, often a young person, whose neuroplasticity and developing self-identity are highly susceptible. The "agent" is not a pathogen but a digital artifact—the algorithmically-driven content, the gamified reward system, the notification—that acts as a behavioral and neurochemical trigger. The "environment" is the digital landscape itself, a new social milieu where social determinants of health are not just reflected but actively amplified and distorted.

​

Recent research has moved beyond simple correlational studies to explore the underlying mechanisms. The social determinants of mental health—such as social exclusion, economic precarity, and parental alienation—are no longer just offline phenomena. They are mediated and exacerbated by social media. For instance, the fear of social exclusion is weaponized by platforms. Missing a viral trend or not receiving enough social recognition on a post is not a benign event; it is a direct attack on the brain’s social pain network, which is neurobiologically linked to physical pain. This is a form of digital-age social biopower, where platforms use our innate need for connection and approval to exert control over our behavior and attention. The constant stress from this feedback loop—a form of technostress—can activate the HPA axis, leading to chronic cortisol release, which has known epigenetic effects, potentially altering gene expression related to mood and anxiety disorders. This is the ultimate neuromarketing success: not just selling a product, but fundamentally restructuring the consumer’s emotional and neurobiological landscape to ensure perpetual engagement.

​

While digital technologies are often hailed as democratizing forces, social epidemiology reveals a new form of inequality. The "digital divide" is not just about access; it's about what happens on the other side of that access. Low-income populations and marginalized communities, while gaining some access to information and connection, are often subjected to more aggressive and exploitative forms of data colonialism. Their digital behavior is monetized, their data scraped for algorithmic training, and they are targeted with manipulative content. For example, recent studies have shown that in some low-income populations, while digital integration may reduce feelings of loneliness and anxiety, this effect is not uniform and can be reversed when the digital environment is not properly integrated with supportive social structures. The promise of digital interventions for mental health is significant, but it faces a major challenge in user engagement, as many individuals stop using the tools after a brief period. This highlights that the mere existence of a tool is insufficient without addressing the underlying social and structural barriers.

​

E.U.LABORATORY PROVIDES MULTIPLE EVIDENCE-BASED SOURCES DEMONSTRATING THE LINK BETWEEN EPIDEMIOLOGY AND SOCIAL FACTORS.

​

EXPLORE MORE IN THE SCIENTIFIC WATCHDOG

​

Liviu Poenaru, Aug. 26, 2025

THE SPECTACLE OF AUTOMATED THREATS

Robots (or bots) now account for more than half of global internet traffic. According to the Bad Bot Report 2025 by Imperva, a U.S. company specializing in cybersecurity, automated traffic generated by bots surpassed human-generated traffic in 2024, making up 51% of total global web traffic. More specifically, malicious bots represented 37% of traffic last year (+5 percentage points compared to 2023), while harmless software, such as search engine indexing bots, accounted for 14% of traffic (-4 points year-on-year). Advances in artificial intelligence are the main reason behind the rise in automated web traffic. Back in 2019, before the boom in AI, human traffic still represented 63% of global internet traffic, malicious bots slightly less than a quarter (24%), while the share generated by harmless bots was similar to today’s (13%). (Statista, retrieved Aug. 23, 2025)

 

The fact that more than half of global web traffic now originates from bots is not an accident but the logical outcome of digital capitalism. Bots are cheap, tireless, and infinitely scalable, making them the perfect tools for companies, states, and spammers to flood the web with content, scrape data, manipulate search engines, inflate metrics, or spread propaganda. Platforms and search engines are structurally bot-friendly: they reward quantity and speed over authenticity, and their infrastructures—indexing, pricing algorithms, automated services—practically invite automation. In this system, bots thrive because they fulfill the demands of visibility and engagement far more effectively than humans, and in doing so, they reshape the online environment into an economy of appearances.

​

Epistemically, this domination by bots generates a profound distortion. When the majority of signals and flows of information are artificially produced, the human subject is left navigating a landscape where “truth” and “relevance” are algorithmically simulated rather than organically expressed. The epistemic noise overwhelms critical capacities: likes, clicks, and shares no longer reliably indicate human intention but are automated echoes. Visibility becomes divorced from authenticity, and what appears to be consensus or popularity may simply be the byproduct of bot-driven replication. Trust in digital traces becomes fragile, forcing individuals to question whether anything they encounter online reflects genuine recognition or simply the mechanical labor of automated systems.

​

Epistemologically, the implications are even deeper. The web was once imagined as a collective intelligence system, a grand archive of human knowledge. If bots now dominate this archive, then knowledge itself is being co-produced—or even simulated—by non-human agents operating with hidden incentives. The categories of authorship, authenticity, and authority become blurred, and verification requires new forms of literacy. In such a context, digital knowledge risks becoming performative rather than representative: it is staged by algorithms and bots that optimize for visibility instead of accuracy. This inversion challenges the very ontology of knowledge in the digital age, raising the question of whether what we encounter online can still be called knowledge at all, or whether it has collapsed into a spectacle of automated signals.

​

GO FURTHER

https://fr.statista.com/infographie/32373/part-du-trafic-internet-mondial-genere-par-des-bots-et-des-humains/

​

Liviu Poenaru, Aug. 23, 2025

Foule pourpre

ARE SOCIETIES AND DEMOCRACIES APHASIC?

Watanabe, T., Inoue, K., Kuniyoshi, Y., Nakajima, K., & Aihara, K. (2025) show that large language models mirror the internal network dynamics of receptive aphasia, with highly polarized patterns of state transitions and dwelling times.

So here’s the punchline of this oddly poetic science experiment: chatbots and human brains with receptive aphasia stumble in surprisingly similar ways. That matters for mental health because it suggests our shiny machines might be replaying certain patterns of dysfunction that we usually only associate with damaged neurons. If AI can mimic the unstable state transitions of an aphasic brain, then clinicians may gain a new diagnostic mirror: machine behavior becomes a proxy for understanding breakdowns in human cognition. The flipside, though, is unsettling—patients may be treated as “glitchy algorithms,” while machines gain a veneer of human pathology. The border between neurology and computation starts to blur, and psychiatry risks being colonized by metaphors drawn from LLMs rather than lived human suffering.

For societies, the comparison is a warning shot. If the tools we use daily—search engines, chatbots, recommendation models—operate in ways akin to a brain disorder, then we are embedding a cognitive vulnerability into public life. Fluency without reliability fuels misinformation, amplifies stress, and chips away at trust in communication. In a culture already over-saturated with noise, deploying systems that mirror pathological processing only deepens collective confusion. This parallel with aphasia underscores how much of our shared reality is being filtered through distorted linguistic engines. Societies built on dialogue risk being reshaped by tools that speak like us but think like damaged brains.

Democracies, in particular, are fragile ecosystems of speech, listening, and deliberation. If public discourse increasingly depends on LLMs that exhibit “aphasic” dynamics—smooth rhetoric masking incoherence—citizens may unknowingly adapt to pathological patterns of sense-making. The danger is not just misinformation but a deeper erosion of cognitive norms: when incoherence becomes normalized, political manipulation thrives, critical debate erodes, and institutions wobble. What began as a quirky overlap between damaged human cognition and clever software turns out to be a structural risk: democratic life might be undermined by machines that, like receptive aphasia, flood the commons with language that sounds meaningful but fails to ground shared understanding.

​

Liviu Poenaru, Aug. 22, 2025


MENTAL HEALTH, ECONOMIC BELIEFS, AND THE NORMALIZATION OF SUFFERING IN 2025 RESEARCH

Liviu Poenaru, Aug. 1, 2025

​

Recent peer-reviewed research published in 2025 at the intersection of mental health and economic beliefs reveals a worrying epistemic limitation: the reduction of suffering to individual malfunction, severed from its social and ideological context. While studies like those of Li & Zeng (2025) and Carter et al. (2025) demonstrate correlations between financial strain and mental health outcomes, they do so through behaviorist and econometric frameworks that treat economic variables as neutral, technical facts. There is no questioning of the symbolic or ideological weight carried by concepts such as “income,” “productivity,” or “value.” These categories are not just descriptive—they are normative constructs that participate in the psychic structuring of individuals within a capitalist world-system.

​

In these studies, “economic beliefs” are treated as discrete, declarative opinions—about market fairness, job security, or financial self-efficacy. Such beliefs are assessed using survey-based tools and psychological scales, with no reference to their historical or structural production. This methodological individualism ignores the way beliefs about the economy are embedded in larger apparatuses: education, media, workplace ideology, and state discourse. As Bourdieu (1998) argued, neoliberal doxa operates precisely by rendering its own principles invisible and natural. Thus, to measure beliefs without interrogating their social genesis is to participate in the very reproduction of economic ideology.

​

Another recurring theme in this literature is the valorization of mental health as a state of emotional regulation and resilience. While this may appear harmless, it conceals a deeper ideological function: to promote compliance with a socio-economic order that itself produces psychic distress. Studies rarely examine whether the economic system imposes contradictory demands—autonomy and obedience, flexibility and stability, success and solidarity—that generate internal conflict. As Alain Ehrenberg (1998) showed in The Weariness of the Self, the neoliberal subject is not merely autonomous; he is exhausted by the imperative to be so. This exhaustion is not a side-effect—it is structural.

​

What is glaringly absent in the 2025 scientific landscape [from my perspective, which systematically scrapes the scientific literature dedicated to topics related to economic codes and their impacts on mental health] is a framework that would allow us to understand how economic norms become internalized as unconscious mandates. Studies speak of stress, anxiety, or self-doubt, but do not explore how these affective states are shaped by deeply rooted economic expectations: to be useful, to be seen, to succeed. Without a theory of economic subjectivation, or without attention to the symbolic violence of economic rationality (as theorized by Boltanski & Chiapello, 2005), mental health research risks reducing distress to maladaptation. The cost is immense: a whitening of psychic life, where suffering is not denied, but neutralized—stripped of meaning, critique, and potential for transformation.

​

While recent research provides useful data, it fails to question the systemic and ideological matrix from which psychic suffering emerges. By reducing beliefs to opinions and distress to dysfunction, this literature becomes complicit with the very forces it might otherwise diagnose. What is needed is a research agenda that interrogates economic beliefs not as private convictions but as collective constructions, embedded in power, shaped by ideology, and sustained through institutional routines. Only then can mental health research move from pathologizing symptoms to critiquing the systems that produce them.

​

REFERENCES

Bourdieu, P. (1998). Contre-feux: Propos pour servir à la résistance contre l'invasion néolibérale. Raisons d’agir.

Boltanski, L., & Chiapello, È. (2005). The new spirit of capitalism (G. Elliott, Trans.). Verso.

Carter, A., Nguyen, V., & Sethi, R. (2025). Mental health and economic decision-making: A systematic review of neurobehavioral correlates. Preprints.org

Ehrenberg, A. (1998). La fatigue d’être soi: Dépression et société. Odile Jacob.

Foucault, M. (2004). Naissance de la biopolitique: Cours au Collège de France (1978–1979). Gallimard/Seuil.

Li, H., & Zeng, Q. (2025). Consumption, relative deprivation and mental health: Evidence from hedonic consumption. Frontiers in Public Health, 13, Article 1511547. https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1511547/full?utm_source=chatgpt.com

​

​

THE EPISTEMIC AUTODAFÉ

Liviu Poenaru, July 11, 2025

​

 

IT’S OBVIOUS that a scientific article now lasts about as long as a TikTok post. Months or years of research, peer review, and theoretical refinement are granted the same attention span as a 15-second video. In the digital environment where information is consumed in rapid, endless scrolls, even the most rigorous knowledge collapses under the weight of distraction. We are not witnessing the democratization of knowledge, but its trivialization. Visibility no longer depends on depth or substance—it depends on algorithms, immediacy, and emotional traction. The economy of virality has overtaken the economy of understanding.

IT’S OBVIOUS that this is not just a crisis of visibility—it is an epistemic autodafé. Like the book burnings of the 20th century, today’s mass disappearance of thought happens in silence, through speed. Nothing is physically destroyed, but everything is buried under noise. What is not performative, short, and marketable is algorithmically sidelined. Scientific knowledge, especially when critical, interdisciplinary, or complex, is drowned out by platform logic. We are burning texts without flames—making entire epistemologies obsolete not by force, but by acceleration.

IT’S OBVIOUS that if we do not confront this process, knowledge itself will become incompatible with the world we live in. The act of thinking in depth, of engaging with complexity, will be reduced to an eccentricity—or a form of resistance. In this architecture of forgetting, science must reclaim its visibility not by simplifying itself, but by defending the conditions of time, slowness, and memory. Otherwise, we risk a future in which the most urgent discoveries vanish before they are even understood, and where truth, like everything else, is just content waiting to be scrolled past.

 

​


THE BLACKOUT CHALLENGE AS A PARADIGMATIC METAPHOR OF SOCIAL MEDIA

Liviu Poenaru, July 5, 2025

​

​

The "blackout challenge" emerged as a dangerous online trend, primarily propagated through social media platforms, wherein individuals intentionally induce a state of asphyxiation—often by strangulation using household items like belts or scarves—with the perilous objective of experiencing a fleeting sensation of euphoria or altered consciousness. Participants frequently film these hazardous acts, disseminating them across various digital channels to achieve visibility and viral reach. This practice, disturbingly, toys with the thin line between life and death within an algorithmic theatre where visibility and virality are elevated to supreme values. The core danger is profound: multiple instances of fatalities have been directly linked to this trend. 

​

In 2021, a 10-year-old girl named Nylah Anderson from Pennsylvania tragically died after attempting the challenge. In 2022, Archie Battersbee, a 12-year-old boy in the UK, was found unconscious by his parents with a ligature around his neck and later died. More recently, in July 2025, a 12-year-old boy named Sébastien in Castleford, UK, also lost his life after participating in this challenge. His family and close ones have since issued stark warnings about the pervasive dangers of such viral phenomena. These are just a few of numerous documented cases across the globe, underscoring the lethal reality behind this online phenomenon.

​

This perilous quest for filmed self-suffocation can be profoundly understood as a metaphorical expression of an asphyxiating relationship with social media itself. Digital platforms, through their inherent design and pervasive influence, often stifle individual expression and genuine individuation by imposing homogenizing behavioral and aesthetic norms. Attention, the vital currency of the digital realm, becomes captured, compressed, and intensely exploited to the point of becoming a scarce commodity—a metaphorical "air" that is bought, sold, and desperately gasped for. Consequently, the pursuit of social recognition within the digital space assumes a pathological vitality: to be seen or to cease to exist becomes the harrowing existential dilemma.

​

The "blackout challenge" further illustrates how users paradoxically believe they are exerting a form of self-control—choosing to suspend their lives, to film themselves, to become objects of attention—when in reality, they are enmeshed within a mimetic control structure. Here, the decision to engage in self-endangerment is merely the culmination of a deeply ingrained process of algorithmic alienation. This act is not a rebellion against the system; rather, it is its logical conclusion in a world where attention capitalism compels individuals to sacrifice their physical bodies for the ephemeral shadow of digital validation.

​

Within the "blackout challenge," the intrinsic meaning of the act is conspicuously absent. It functions neither as a rite of passage, nor a form of protest, nor a genuine personal challenge. Instead, it is a pure production of signal: a shocking image, a brief video clip, a fleeting spike in visibility. The metaphor becomes chillingly evident: nothing needs to signify; everything merely needs to circulate, capture attention, and generate "reach." The body itself transforms into a sacrificial interface within the relentless economy of clicks.

​

As a paradigmatic metaphor, the "blackout challenge" encapsulates several fundamental traits of contemporary social media. It highlights a self-destructive pleasure derived from recognition, often intertwined with a profound anxiety of non-existence when one is not seen, liked, or commented upon. It signifies an extinction of the connection to reality, replaced by the intense, often overwhelming, affects generated within digital spaces. There is also a disturbing inversion of ritual: historically, rites initiated individuals into symbolic life; here, they initiate them into symbolic death, sometimes tragically real.

​

The "blackout challenge" represents an extreme yet profoundly revealing symptom of a desire for disappearance that pervades connected societies. This is not merely an individual aberration but the tragic expression of a systemic issue wherein attention becomes more valuable than life, recognition more vital than breath, and virality more urgent than intrinsic meaning. In this sense, the "blackout challenge" is not an anomaly; it is a dark mirror reflecting our era, a scene of symbolic suffocation where the individual does not perish despite social media, but rather through them, within them, and for them.

​

 

GO FURTHER

https://www.theguardian.com/technology/2025/feb/07/tiktok-sued-over-deaths-of-children-said-to-have-attempted-blackout-challenge

https://www.independent.co.uk/news/world/americas/tiktok-blackout-challenge-nylah-anderson-lawsuit-b2603370.html

​

​

DID YOU KNOW THAT DISRUPTION IS THE RULE?

Liviu Poenaru, July 3, 2025

 

​

DID YOU KNOW THAT the largest psychological experiment in political history was conducted on over 87 million Facebook users—without their consent? Cambridge Analytica, born from military psychological operations (PSYOPs) and backed by Steve Bannon, didn’t just scrape personal data—it militarized it. By hijacking emotional vulnerabilities and exploiting affective triggers like anger, fear, and resentment, the firm engineered digital psychowarfare. This new frontier of manipulation blended psychographic profiling, viral messaging, and AI-enhanced targeting to sway elections, fracture societies, and activate deep-seated prejudices. What seemed like simple online ads were in fact emotionally weaponized scripts tested on specific populations—turning the digital space into a battlefield of psychological conditioning.

 

DID YOU KNOW THAT the guiding principle behind this strategy wasn’t truth, but disruption? Cambridge Analytica whistleblower Christopher Wylie recounts how chaos was the goal—not an unfortunate side effect. “Nonsense is more effective than truth,” was the underlying creed. Instead of building consensus, the operation deliberately stirred rage, paranoia, and confusion, using fake news, disturbing imagery, and microtargeted propaganda to fragment reality itself. This was not just a tech scandal; it was a global experiment in perspecticide: the systematic breakdown of symbolic thought, shared meaning, and narrative continuity. The algorithm didn’t just track you—it sculpted you. And it did so through your eyes, emotions, and unconscious biases.

 

DID YOU KNOW THAT your anger could make you less rational and more impulsive—and that this was precisely the effect Cambridge Analytica sought? By triggering emotional overdrive, the firm made voters more susceptible to propaganda, more likely to punish perceived outsiders, and more willing to accept economic self-harm for ideological gratification. Emotional contagion spread like a digital virus across WhatsApp, Facebook, and Twitter, facilitated by Big Data, military contracts, and political ambition. What Wylie ultimately reveals is a blueprint of 21st-century emotional capitalism—one that transforms your pain, fear, and frustration into a lucrative engine of engagement, polarization, and control. This isn’t just manipulation. It’s emotional colonization.

 

​

​

HYPERMOBILITY AND THE PARADOX OF DIGITAL FREEDOM:
How Informational Movement Fuels Ideological Segregation

Liviu Poenaru, June 30, 2025

​

DID YOU KNOW THAT the more we move in digital environments—clicking, swiping, jumping from one group to another—the more we may unknowingly contribute to ideological segregation and social polarization? Unlike the physical world, where mobility was historically limited and coexistence with diverse perspectives was structurally inevitable, digital spaces allow us to escape disagreement in milliseconds. With a single gesture, we leave discomfort and land in echo chambers filled with validation, sameness, and familiar narratives. This hypermobility feels empowering—but it comes at the cost of cognitive plurality and collective dialogue.​

 

Computational models like Schelling’s segregation simulations illustrate this dynamic: even when individuals exhibit a high tolerance for diversity, if they are allowed to move freely and widely, they tend to cluster into homogeneous communities. Transposed to digital society, this suggests that the greater our freedom of informational mobility, the more likely we are to seek out the ideologically comfortable. Algorithmic infrastructures further entrench this tendency by amplifying content we already agree with, leading to what researchers describe as “networked homophily”—a feedback loop of identity, preference, and confirmation bias.​
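
Schelling’s result is easy to reproduce. The minimal Python sketch below puts two agent types on a ring; an agent is unhappy only if fewer than 30% of its neighbors share its type (a deliberately high tolerance), and unhappy agents simply move to a random empty cell. Parameters are illustrative, not drawn from any particular study.

```python
import random

SIZE, EMPTY_RATE, TOLERANCE = 200, 0.1, 0.3

cells = [random.choice("AB") if random.random() > EMPTY_RATE else None
         for _ in range(SIZE)]

def same_type_share(i):
    """Fraction of occupied neighbors (distance <= 2) sharing cell i's type."""
    neigh = [cells[(i + d) % SIZE] for d in (-2, -1, 1, 2)]
    neigh = [n for n in neigh if n is not None]
    if not neigh:
        return 1.0
    return sum(n == cells[i] for n in neigh) / len(neigh)

def average_segregation():
    occupied = [i for i in range(SIZE) if cells[i] is not None]
    return sum(same_type_share(i) for i in occupied) / len(occupied)

print(f"before: {average_segregation():.2f} same-type neighbors on average")
for sweep in range(50):
    for i in range(SIZE):
        if cells[i] is not None and same_type_share(i) < TOLERANCE:
            empties = [j for j in range(SIZE) if cells[j] is None]
            j = random.choice(empties)
            cells[j], cells[i] = cells[i], None   # relocate the unhappy agent
print(f"after:  {average_segregation():.2f} same-type neighbors on average")
```

Despite the mild 30% threshold, the average same-type share climbs clearly above the initial 50/50 mix: clustering emerges from mobility itself, which is precisely the transposition the paragraph makes to frictionless digital movement.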

 

In this sense, more digital movement doesn’t mean more exposure—it means more self-selected filtering. The result is a paradox: our global information networks were supposed to increase access to plurality, but they often reduce us to narrow islands of shared belief, disconnected from the wider social fabric. Polarization is not a failure of digitality—it’s a logical consequence of its architecture, unless deliberate friction, dialogue, and diversity are reintroduced as core values of digital design.

 

​

DID YOU KNOW THAT BAD IS BETTER?

Liviu Poenaru, June 28, 2025

​

Our brains are wired to prioritize negative information. This phenomenon, known as negativity bias, means we're more likely to focus on, and remember, bad news over good or neutral information. This isn't just a quirk; it's a deep-seated evolutionary trait that once helped our ancestors survive by keeping them alert to dangers.

 

At a synaptic level, learning creates prolonged changes in the strength of synaptic connections, a concept called plasticity. When it comes to negative stimuli, our brains become sensitized, strengthening the synapses associated with dangerous or threatening information. This process is so powerful that it could explain why political and economic powers often use fear: to ensure sensitization to certain stimuli, preventing habituation and cognitive disinterest from setting in.
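
A toy Hebbian-style update makes the asymmetry visible. In the Python sketch below, which is an illustration rather than a biophysical model, both stimuli are presented equally often, but the learning rate for threat-tagged input is assumed to be higher, standing in for sensitization; a small decay term stands in for habituation.

```python
# Equal exposure, unequal plasticity: a caricature of negativity bias.

weights = {"neutral_news": 0.1, "threat_news": 0.1}
LEARNING_RATE = {"neutral_news": 0.01, "threat_news": 0.08}  # assumed bias
DECAY = 0.02                                                 # habituation pull

for exposure in range(200):          # identical exposure to both stimuli
    for stim in weights:
        weights[stim] += LEARNING_RATE[stim] * (1.0 - weights[stim])
        weights[stim] -= DECAY * weights[stim]

print({k: round(v, 2) for k, v in weights.items()})
# -> threat_news settles far higher despite equal exposure
```

The fixed points fall out of the arithmetic: each weight converges where learning balances decay, roughly 0.33 for the neutral stream and 0.80 for the threatening one, a synaptic caricature of why fear-laden content resists habituation.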

 

Recent research from Soroka, Fournier, and Nir (2019) provides robust evidence of this bias across different cultures and countries, showing it's a universal phenomenon. People consistently exhibit stronger psychophysiological reactions to negative news, regardless of their cultural background. This pervasive bias has profound implications for media consumption, as it directly influences what news gets produced and highlighted.

 

The result? Media outlets, driven by the need to capture attention, are incentivized to disproportionately focus on negative events. This creates a skewed reality, potentially fostering a more pessimistic worldview and contributing to increased stress, anxiety, and depression. It's a lucrative model for digital platforms, which algorithmically capture our attention by emphasizing negativity and perpetuating a culture of suffering. This constant bombardment with increasingly polarized and catastrophic information, negative emotions, and negative comments thrives on capturing attention by any means necessary.

​

​


LET’S BUILD AN ECONOMICALLY AWARE WORLD
