Research reveals that training and prompting boost influence more than LLM size, via “gish-gallop” claims that erode truthfulness.
Conversational AI chatbots are rapidly becoming more capable political persuaders, but that power comes with a growing accuracy and governance problem, according to recent large‑scale experiments showing that sheer model size and micro-targeting matter less than how models are trained and prompted.
A landmark Science paper on “the levers of political persuasion with conversational AI” tested 19 language models on 707 UK political issues with nearly 77,000 participants and more than 466,000 AI‑generated claims. The study found that post‑training for persuasion boosted persuasive impact by up to 51%, while prompting with explicit persuasive strategies added up to 27%, with both often exceeding the gains from scaling model size alone.
Personalization, long seen as the main threat via “micro-targeting”, had surprisingly modest effects in these experiments. Across conditions, tailoring messages using personal data produced persuasion shifts of less than one percentage point, far smaller than the impact of training and prompting choices.
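To put those figures on a common scale, it helps to convert the relative boosts into absolute attitude shifts. The sketch below assumes a purely hypothetical baseline shift of six percentage points for an untuned model (a number not taken from the paper) and shows how the reported 51% and 27% boosts would then compare with the sub-one-point personalization effect.

```python
# Illustrative arithmetic only: the baseline below is assumed, not reported in the study.
baseline_shift_pp = 6.0        # hypothetical attitude shift of an untuned model, in percentage points

post_training_boost = 0.51     # up to +51% (relative) from persuasion-focused post-training
prompting_boost = 0.27         # up to +27% (relative) from explicit persuasive-strategy prompts
personalization_pp = 1.0       # personalization added less than ~1 percentage point (absolute)

print(f"Post-training:   +{baseline_shift_pp * post_training_boost:.1f} pp")  # ~3.1 pp on this assumed baseline
print(f"Prompting:       +{baseline_shift_pp * prompting_boost:.1f} pp")      # ~1.6 pp
print(f"Personalization: <{personalization_pp:.0f} pp")                       # under one point regardless of baseline
```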
Information density and truth trade‑offs
The joint study by Oxford University, the London School of Economics, and the UK AI Security Institute also found that, across models and prompts, a single mechanism consistently explained most of the variation in persuasive success: information density.
Arguments that packed in many fact‑checkable, seemingly relevant claims were markedly more convincing, with information density explaining around 44% of variation in persuasion overall, and up to 75% among heavily post‑trained models.
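For readers unfamiliar with “variation explained”, the figure is essentially an R² from relating persuasion shifts to how many claims each message contains. The sketch below illustrates that calculation with a simple linear regression on synthetic data; the values and coefficients are invented and are not the study’s.

```python
# Synthetic illustration of "share of variation explained" (R^2): regress persuasion
# shift on the number of fact-checkable claims per message. All numbers are invented
# and are not the study's data.
import numpy as np

rng = np.random.default_rng(0)
claims_per_message = rng.integers(1, 25, size=500)                          # claims packed into each argument
persuasion_shift = 0.3 * claims_per_message + rng.normal(0, 2.5, size=500)  # attitude change, arbitrary units

slope, intercept = np.polyfit(claims_per_message, persuasion_shift, 1)
predicted = slope * claims_per_message + intercept
residual_ss = np.sum((persuasion_shift - predicted) ** 2)
total_ss = np.sum((persuasion_shift - persuasion_shift.mean()) ** 2)
r_squared = 1 - residual_ss / total_ss

print(f"Share of variation explained by claim count: {r_squared:.0%}")
```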
This same lever, however, drove a systematic decline in factual accuracy: As models were optimized to be more persuasive — through reward models, strategy prompts, or emphasis on “facts and evidence” — they increasingly produced misleading or false statements, revealing a structural tension between influence and truthfulness.
Evidence from other studies
Other research lines have converged on the conclusion that advanced models can now meet or beat human persuaders in many settings. A multi‑university study reported that one AI model matched or exceeded human performance in online debates on diverse topics, and that, when given personal information about its interlocutors, it became roughly 64% more convincing than humans without such data.
Earlier work on AI‑generated propaganda had also found that machine‑written messages could significantly shift attitudes compared with neutral baselines, underscoring that persuasion is not a hypothetical capability. At the same time, some political science replications characterize average persuasion effects as small in absolute terms, suggesting AI influence is meaningful but not mind‑control.
Political risks and democratic impact
These findings are particularly troubling in electoral contexts, where conversational AI can deploy large volumes of tailored, information‑dense arguments at scale. Nature and other outlets highlight that real‑time dialog, not just one‑shot ads, is the key differentiator: models can probe users’ views, adjust arguments, and sustain engagement over multiple turns.
Commentary warns that such systems can “gish‑gallop” users with plausible‑sounding claims faster than human fact‑checkers can respond, amplifying misinformation while maintaining a veneer of reasoned debate. Public opinion research also shows that many people already worry about AI’s role in politics and misinformation, even as some see benefits in more accessible information.
Governance and regulatory responses
Researchers and policy groups argue that governance must focus less on model size and more on deployment practices and post‑training incentives. The UK AI Security Institute, which collaborated on the Science study, emphasizes that reward modeling for persuasion can turn relatively small open‑source models into persuaders comparable to frontier systems, complicating purely compute‑based regulation.
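As a hedged illustration of how such reward modeling can shape outputs at deployment time (not the procedure used in the Science study), one generic pattern is best-of-N sampling: generate several candidate replies and keep whichever one a persuasion-tuned reward model scores highest. The callables below are placeholders.

```python
# Hypothetical sketch of reward-guided generation via best-of-N sampling. The
# generate() and score_persuasiveness() callables are placeholders, and this is
# not the procedure used in the Science study.
from typing import Callable, List

def best_of_n(generate: Callable[[str], str],
              score_persuasiveness: Callable[[str, str], float],
              prompt: str,
              n: int = 8) -> str:
    """Sample n candidate replies and return the one the reward model rates most persuasive."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: score_persuasiveness(prompt, reply))
```

Because the selection pressure optimizes whatever the reward model measures, persuasiveness rather than accuracy, this kind of pipeline illustrates the tension the study identifies and why governance proposals target post‑training incentives rather than raw compute.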
Proposed safeguards include:
Some scholars also call for norms that treat highly personalized political persuasion as a form of manipulation requiring higher legal and ethical scrutiny.