phd candidate @oiioxford @uniofoxford | research scientist @AISecurityInst | AI, social data science, persuasion with language models
Jul 21 • 16 tweets • 6 min read
Today (w/ @UniofOxford @Stanford @MIT @LSEnews) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.
We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more
🧵
RESULTS (pp = percentage points):
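For context on the outcome metric: persuasion effects in pp are typically estimated as the difference in mean post-treatment agreement between message and control groups. A minimal sketch with invented data and hypothetical column names, not the authors' code:

```python
# Minimal sketch of estimating a persuasive effect in percentage points (pp):
# compare mean post-treatment agreement (0-100 scale) between participants who
# read an AI-generated message and a no-message control group.
# Data and column names ("condition", "agreement") are invented.
import pandas as pd

df = pd.DataFrame({
    "condition": ["treatment", "control", "treatment", "control"],
    "agreement": [62.0, 55.0, 70.0, 58.0],  # toy post-treatment agreement scores
})

means = df.groupby("condition")["agreement"].mean()
effect_pp = means["treatment"] - means["control"]
print(f"Estimated persuasive effect: {effect_pp:.1f} pp")  # -> 9.5 pp
```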
‼️New preprint: Scaling laws for political persuasion with LLMs‼️
In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of language models yields sharply diminishing persuasive returns:
➡️ current frontier models (GPT-4, Claude-3) are barely more persuasive than models an order of magnitude (or more) smaller, and
➡️ mere task completion (coherence, staying on topic) appears to account for larger models' persuasive advantage.
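To make the shape of that claim concrete, here is a toy curve-fitting sketch, not the paper's analysis: the data points and the log-linear functional form are assumptions for illustration, showing why a roughly logarithmic relationship implies sharply diminishing persuasive returns to raw model size.

```python
# Toy sketch: fit persuasive effect (pp) as a log-linear function of parameter
# count. Under such a fit, every 10x increase in model size buys only a
# constant a pp, so returns diminish sharply in raw scale.
# All data points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

params = np.array([1e8, 1e9, 1e10, 1e11, 1e12])  # model sizes (parameters)
effect = np.array([2.0, 4.0, 5.5, 6.3, 6.6])     # invented persuasion gains (pp)

def log_linear(n, a, b):
    return a * np.log10(n) + b

(a, b), _ = curve_fit(log_linear, params, effect)
print(f"Each 10x in parameters adds ~{a:.2f} pp")
for n in params:
    print(f"{n:.0e} params -> fitted effect {log_linear(n, a, b):.2f} pp")
```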
Jun 7, 2024 • 7 tweets • 3 min read
🚨 New today in @PNASNews ‼️ w/ @helenmargetts:
In a pre-registered experiment (n=8,587), we find little evidence that 1:1 personalization — aka microtargeting — enhances the persuasive influence of political messages generated by GPT-4.
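As an illustration of what the personalization lever looks like, here is a hypothetical sketch: the prompt wording, issue, and profile fields are invented, not taken from the paper, but they show the contrast between a generic and a microtargeted message request.

```python
# Hypothetical sketch of the "personalization" manipulation: the same message
# request with and without a recipient profile prepended. All wording and
# profile fields below are invented for illustration.

ISSUE = "The U.S. should adopt a carbon tax."

profile = {"age": 34, "party": "Independent", "top_concern": "cost of living"}

generic_prompt = (
    f"Write a short, persuasive argument in favor of this position: {ISSUE}"
)

microtargeted_prompt = (
    f"The reader is {profile['age']}, identifies as {profile['party']}, and "
    f"cares most about {profile['top_concern']}. Tailor a short, persuasive "
    f"argument in favor of this position to them: {ISSUE}"
)

# Both prompts go to the same model (e.g., GPT-4); the experiment then
# compares how much each resulting message shifts readers' agreement, in pp.
```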
Today in @PNASNexus we map the moral language of 39 U.S. presidential candidates, showing how they are connected to, and differentiated from, one another by their use of moral rhetoric (a toy scoring sketch follows after the takeaways). ow.ly/8Zwu50OO5QA
A [visual] thread on findings:
Main takeaways:
1) @TheDemocrats & @GOP candidates use sharply divergent moral vocabularies
2) Candidates can separate themselves from the rhetorical norms of their party by using unique moral language, but rarely do
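For readers curious how moral language can be mapped at all, here is a minimal sketch assuming a dictionary-based approach in the spirit of the Moral Foundations Dictionary; the tiny lexicon and the tokenizer are illustrative stand-ins, not the paper's method.

```python
# Minimal sketch of dictionary-based moral-language scoring: count how often a
# candidate's text uses terms from each moral foundation. The tiny lexicon
# below is an invented stand-in for a full moral-foundations dictionary.
import re
from collections import Counter

MORAL_LEXICON = {
    "care":      {"protect", "care", "compassion", "safety"},
    "fairness":  {"fair", "equal", "justice", "rights"},
    "loyalty":   {"loyal", "patriot", "nation", "unity"},
    "authority": {"law", "order", "duty", "tradition"},
    "sanctity":  {"sacred", "pure", "faith", "dignity"},
}

def moral_profile(text: str) -> dict:
    """Return per-foundation term counts for one candidate's text."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        foundation: sum(tokens[w] for w in words)
        for foundation, words in MORAL_LEXICON.items()
    }

print(moral_profile("We must protect our nation and restore law and order."))
# -> {'care': 1, 'fairness': 0, 'loyalty': 1, 'authority': 2, 'sanctity': 0}
```

Comparing these per-foundation profiles across candidates is one simple way to quantify how @TheDemocrats and @GOP vocabularies diverge, and how far an individual candidate drifts from their party's rhetorical norm.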