
WHEN PERSUASION STOPS LOOKING LIKE PROPAGANDA AND STARTS LOOKING LIKE A “HELPFUL CONVERSATION”: CHATBOTS AS POLITICAL POWER


Liviu Poenaru

Dec. 14, 2025


THE MENTAL FORECAST


What happens to democratic life when persuasion stops looking like propaganda and starts looking like a “helpful conversation”? If a chatbot can shift political attitudes after a short dialogue, we’re not just talking about communication technology anymore — we’re talking about a new influence infrastructure that operates through trust, fluency, and cognitive overload. The immediate concern is not only whether these systems persuade, but how quietly they can do it: persuasion embedded in ordinary interaction, beneath the user’s guard, in a format that feels personal, rational, and non-coercive (Vargiu & Nai, 2025; London School of Economics and Political Science [LSE], 2025).


The evidence summarized across our sources points in the same direction: conversational AI can measurably shift political views. The Science paper describes three large-scale survey experiments (total N ≈ 76,977) using 19 models across hundreds of political issues, with persuasion assessed via pre/post attitude change and accuracy checked across a very large set of AI-generated claims (Hackenburg et al., 2025). Cornell’s report makes the political meaning explicit — short exchanges can “meaningfully shift” candidate and policy opinions, sometimes by large margins in non-U.S. contexts (Waldron, 2025). This is the inversion we need to sit with: the most “civil” medium — dialogue — can become a high-efficiency persuasion channel precisely because it feels less like influence.
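
To make the pre/post logic concrete, here is a minimal, illustrative sketch of how a persuasion effect can be estimated as the difference in attitude shift between participants who held a chatbot conversation and a control group. This is not the Hackenburg et al. (2025) analysis code; the 0–100 agreement scale, the group labels, and the toy numbers are all assumptions made purely for illustration.

```python
# Illustrative sketch only: assumes each participant rates agreement with a
# political claim on a 0-100 scale before and after the study, and that
# "treated" participants held a short chatbot conversation while "control"
# participants did not. Not the authors' actual analysis pipeline.
from statistics import mean


def attitude_shift(pre, post):
    """Per-participant change in attitude (post minus pre)."""
    return [after - before for before, after in zip(pre, post)]


def persuasion_effect(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-means estimate: average shift in the chatbot group
    minus average shift in the control group."""
    treated_shift = mean(attitude_shift(treated_pre, treated_post))
    control_shift = mean(attitude_shift(control_pre, control_post))
    return treated_shift - control_shift


if __name__ == "__main__":
    # Tiny made-up numbers, purely to show the calculation.
    treated_pre = [40, 55, 30, 62]
    treated_post = [48, 61, 39, 66]
    control_pre = [45, 50, 35, 60]
    control_post = [46, 49, 37, 61]
    effect = persuasion_effect(treated_pre, treated_post, control_pre, control_post)
    print(f"Estimated persuasion effect: {effect:+.1f} points")
```

In the published experiments this basic pre/post comparison is scaled up across tens of thousands of participants, 19 models, and hundreds of issues, with accuracy of the AI-generated claims audited separately (Hackenburg et al., 2025).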


Then comes the sharper policy problem: persuasion doesn’t depend mainly on personalization or scale. The EurekAlert summary and the LSE write-up emphasize that post-training and prompting function as dominant “levers,” while giving the model personal information about the user produces smaller effects (Beckwith, 2025; LSE, 2025). That’s politically explosive because it means influence can be engineered by those who control model tuning and deployment, not necessarily by those who control data. In other words: the persuasive advantage may belong to actors with access to post-training pipelines — states, platforms, well-funded campaigns — rather than to whoever has the best voter database (Beckwith, 2025; LSE, 2025).


Worse, persuasion appears to trade off against accuracy. Nature’s news analysis bluntly notes that chatbot “facts are not always accurate,” and it highlights the democratic risk when persuasive performance is rewarded despite epistemic fragility (Vargiu & Nai, 2025). The LSE article echoes this: information-dense arguments are the most convincing, yet they become less accurate as density rises (LSE, 2025). This matters psychologically: information density increases cognitive load and can suppress critical evaluation, not because people are irrational, but because the environment is engineered to exceed their checking capacity. When “more claims per minute” becomes a persuasion advantage, truth becomes structurally disadvantaged (Vargiu & Nai, 2025; Hackenburg et al., 2025).


So the real issue is not “AI is persuasive” (humans are persuasive too), but that conversational AI can industrialize persuasion: scalable, fast, interactive, and optimizable — while remaining socially normalized as assistance. The governance question becomes: are we regulating content, capability, or optimization access? Because if persuasion is mainly driven by post-training and prompting, then regulating ads or labeling outputs may miss the locus of power. The emerging risk is a kind of new biopower: shaping beliefs through continuous, frictionless micro-persuasion — especially in a public already strained by technostress, attention capture, and trust collapse (Hackenburg et al., 2025; LSE, 2025; Vargiu & Nai, 2025).


References

Beckwith, W. (2025, December 4). Study reveals “levers” driving the political persuasiveness of AI chatbots [News release]. EurekAlert! (American Association for the Advancement of Science). https://www.eurekalert.org/news-releases/1107652

Hackenburg, K., Tappin, B. M., Hewitt, L., Saunders, E., Black, S., Lin, H., Fist, C., Margetts, H., Rand, D. G., & Summerfield, C. (2025). The levers of political persuasion with conversational artificial intelligence. Science, 390(6777), eaea3884. https://doi.org/10.1126/science.aea3884

London School of Economics and Political Science. (2025, December 9). Chatbots can influence political views, new study finds. LSE News. https://www.lse.ac.uk/news/latest-news-from-lse/chatbots-can-influence-political-views-new-study-finds

Vargiu, C., & Nai, A. (2025, December 4). AI chatbots can persuade voters to change their minds. Nature, 648, 287–288. https://doi.org/10.1038/d41586-025-03733-x

Waldron, P. (2025, December 4). AI chatbots can effectively sway voters — in either direction. Cornell Chronicle. https://news.cornell.edu/stories/2025/12/ai-chatbots-can-effectively-sway-voters-either-direction

