📢 New paper: Why do we fall for AI-generated language? #GPT #LaMDA
🔬 We ran 6 studies on how people identify generated text
🧐 We find people wrongly associate 1st-person pronouns, family words, contractions, and more with humanity
🤖 This lets AI systems generate language perceived as MORE human than human
Across email, chat, and social media, AI systems produce smart replies, autocompletes, and translations. AI-generated language is often not identified as such but poses as human language, raising concerns about novel forms of deception and manipulation.
Online self-presentation is essential for establishing trust. We conducted six experiments asking participants (N = 4,650) to identify self-presentation text generated by current language models. Across settings, we find that people cannot reliably distinguish AI-generated self-presentations from human-written ones.
Why are people bad at identifying generated language? We show that human judgments of AI-generated language are handicapped by intuitive but flawed heuristics, such as associating first-person pronouns, contractions, or family topics with humanity.
These heuristics make our judgment of generated language predictable and manipulable. In three replication experiments, we show that AI systems can exploit our flawed intuition to produce language perceived as more human than human.
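To make the exploitation mechanism concrete, here is a minimal sketch of how a system could rank generated candidates by the heuristic cues described above. The cue lists, scoring function, and example texts are hypothetical illustrations, not the paper's actual method or data.

```python
# Hypothetical sketch: exploiting flawed "humanness" heuristics by picking
# the generated candidate richest in heuristic cues. All cue lists and the
# scoring scheme are illustrative assumptions, not values from the paper.
import re

# Cues the studies found people (wrongly) read as signs of a human author.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
FAMILY_WORDS = {"family", "mom", "dad", "kids", "wife", "husband", "daughter", "son"}
CONTRACTION_RE = re.compile(r"\b\w+'\w+\b")  # e.g. "I'm", "don't"


def humanness_cue_score(text: str) -> int:
    """Count heuristic cues that raise perceived humanness (illustrative only)."""
    tokens = re.findall(r"[\w']+", text.lower())
    score = sum(t in FIRST_PERSON for t in tokens)
    score += sum(t in FAMILY_WORDS for t in tokens)
    score += len(CONTRACTION_RE.findall(text))
    return score


def most_human_seeming(candidates: list[str]) -> str:
    """Pick the generated candidate that best exploits the heuristics."""
    return max(candidates, key=humanness_cue_score)


if __name__ == "__main__":
    candidates = [
        "The apartment is located near downtown and public transit.",
        "I'm a mom of two and my family loves hosting travelers in our home.",
    ]
    print(most_human_seeming(candidates))  # the second text wins on cue count
```

Because the cues are simple surface features, a generator that optimizes for them can systematically shift human judgments, which is the manipulability the replication experiments demonstrate.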
We conclude by discussing solutions, such as AI accents or fair use policies, to reduce the deceptive potential of generated language and limit the subversion of human intuition.