phd candidate @oiioxford @uniofoxford | researcher @turinginst | AI, data science, persuasion with language models
Jun 21 • 12 tweets • 2 min read
‼️New preprint: Scaling laws for political persuasion with LLMs‼️
In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of language models yields sharply diminishing persuasive returns:
➡️ current frontier models (GPT-4, Claude-3) are barely more persuasive than models an order of magnitude (or more) smaller, and
➡️ mere task completion (coherence, staying on topic) appears to account for larger models' persuasive advantage.
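For intuition, "sharply diminishing persuasive returns" can be pictured as a curve of persuasive effect that flattens as model size grows. The snippet below is a toy sketch of fitting such a saturating curve with scipy; the parameter counts, effect sizes, and the logistic functional form are all illustrative assumptions, not the preprint's data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Purely illustrative placeholder data (NOT the preprint's results):
# model size in parameters vs. persuasive effect in percentage points.
params = np.array([1e8, 1e9, 1e10, 1e11, 1e12])
effect = np.array([2.0, 4.5, 5.8, 6.3, 6.5])

# A saturating curve in log10(parameters): gains flatten as models grow,
# one simple way to express "sharply diminishing persuasive returns".
def saturating(log_n, ceiling, rate, midpoint):
    return ceiling / (1.0 + np.exp(-rate * (log_n - midpoint)))

log_n = np.log10(params)
popt, _ = curve_fit(saturating, log_n, effect, p0=[7.0, 1.0, 9.0])

# Predicted marginal gain from scaling 10x beyond the largest toy model:
gain = saturating(13.0, *popt) - saturating(12.0, *popt)
print(f"Predicted gain from 1e12 -> 1e13 params: {gain:.2f} pp")
```

Under these made-up numbers the fitted curve yields a near-zero marginal gain from a further 10x in parameters, which is the shape of the claim in the thread.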
Jun 7 • 7 tweets • 3 min read
🚨New today in @PNASNews ‼️ w/ @helenmargetts:
In a pre-registered experiment (n=8,587), we find little evidence that 1:1 personalization — aka microtargeting — enhances the persuasive influence of political messages generated by GPT-4.
Today in @PNASNexus we map the moral language of 39 U.S. presidential candidates to show how they're connected / differentiated by their use of moral rhetoric. ow.ly/8Zwu50OO5QA
A [visual] thread on findings:
Main takeaways:
1) @TheDemocrats & @GOP candidates use sharply divergent moral vocabularies
2) Candidates can separate themselves from the rhetorical norms of their party by using unique moral language, but rarely do