Kobi Hackenburg
PhD candidate @oiioxford @uniofoxford | researcher @turinginst | AI, data science, persuasion with language models
Jun 21
‼️New preprint: Scaling laws for political persuasion with LLMs‼️

In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of language models yields sharply diminishing persuasive returns:

1/n arxiv.org/abs/2406.14508
We find that:

➡️ current frontier models (GPT-4, Claude-3) are barely more persuasive than models an order of magnitude (or more) smaller, and

➡️ mere task completion (coherence, staying on topic) appears to account for larger models' persuasive advantage.
Jun 7
🚨New today in @PNASNews ‼️w/ @helenmargetts:

In a pre-registered experiment (n=8,587), we find little evidence that 1:1 personalization — aka microtargeting — enhances the persuasive influence of political messages generated by GPT-4. 

1/7 👇🏼 pnas.org/doi/10.1073/pn…
Our findings suggest:

1️⃣ Personalizing static political messages with current frontier LLMs may not offer the persuasive advantage that has been widely speculated…

2️⃣ but both targeted and non-targeted messages are broadly persuasive.

How we tested this: [figure]
Jun 15, 2023
🚨 New paper! 🚨 w/ @william__brady & @manos_tsakiris:

Today in @PNASNexus we map the moral language of 39 U.S. presidential candidates to show how they're connected / differentiated by their use of moral rhetoric.
ow.ly/8Zwu50OO5QA

A [visual] thread on findings:

Main takeaways:

1) @TheDemocrats & @GOP candidates use sharply divergent moral vocabularies

2) Candidates can distinguish themselves from their party's rhetorical norms by using unique moral language, but they rarely do