Simulating human behavior with AI agents promises a testbed for policy and the social sciences. We interviewed 1,000 people for two hours each to create generative agents of them. These agents replicate their source individuals’ attitudes and behaviors. 🧵 arxiv.org/abs/2411.10109
When we presented generative agents last year, we pointed to a future where we can simulate life to understand ourselves better in situations where direct engagement or observation is impossible (e.g., health policies, product launches, or external shocks). (2/14)
But we felt our story was incomplete: to trust these simulations, they must avoid flattening agents into demographic stereotypes, and measuring their accuracy needs to go beyond whether they replicate average treatment effects. (3/14)
We found our answer in models of individuals—creating generative agents that reflect real individuals and validating them by measuring how well they replicate each individual's own responses to the General Social Survey, Big Five Personality tests, economic games, and RCTs. (4/14)
To achieve this, we turned to a foundational social science method: interviews. We developed a real-time, voice-to-voice AI interviewer that conducted two-hour, semi-structured interviews to teach us about these individuals’ lives and beliefs. (5/14)
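For a sense of what a semi-structured AI interviewer can look like, here is a minimal, text-only sketch: a fixed protocol of questions plus model-generated follow-ups. Everything in it (the protocol questions, the `ask_llm` and `get_answer` callables, the follow-up budget) is a hypothetical stand-in, not our actual implementation, which runs in real time over voice:

```python
# Sketch of a semi-structured interview loop: fixed protocol questions,
# with the model probing deeper after each answer. All names here are
# hypothetical placeholders.

PROTOCOL = [
    "Tell me the story of your life, starting from your childhood.",
    "How would you describe your political views, and why?",
]

def run_interview(get_answer, ask_llm, max_followups=2):
    """Walk the protocol, asking a few dynamic follow-ups per question."""
    transcript = []
    for question in PROTOCOL:
        answer = get_answer(question)            # participant's reply
        transcript.append((question, answer))
        for _ in range(max_followups):           # model-generated follow-ups
            followup = ask_llm(
                "Ask one brief follow-up question that probes deeper.\n"
                f"Q: {question}\nA: {answer}"
            )
            answer = get_answer(followup)
            transcript.append((followup, answer))
    return transcript
```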
Our finding: the agents perform well. They replicate participants' responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and experimental outcomes. (6/14)
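To make the 85% figure concrete, here is a small sketch of the normalized-accuracy idea behind it: the agent's agreement with the participant's original survey answers, divided by the participant's own agreement with themselves on retest two weeks later. The function names and the numbers in the closing comment are illustrative only:

```python
def agreement(responses_a, responses_b):
    """Fraction of survey items on which two response sets match."""
    matches = sum(a == b for a, b in zip(responses_a, responses_b))
    return matches / len(responses_b)

def normalized_accuracy(agent_answers, original_answers, retest_answers):
    """Agent-vs-participant agreement, normalized by the participant's
    own consistency with themselves two weeks later."""
    raw = agreement(agent_answers, original_answers)
    ceiling = agreement(retest_answers, original_answers)
    return raw / ceiling

# Purely illustrative numbers: an agent matching 68% of a participant's
# original answers, when that participant is 80% consistent on retest,
# scores 0.68 / 0.80 = 0.85.
```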
In addition, our interview-based agents reduce accuracy biases across racial and ideological groups compared to agents provided with demographic descriptions. We attribute this to the agents in our study reflecting the myriad idiosyncratic factors of real individuals. (7/14)
In sum, this work opens the door to simulating individuals. We believe that accurately modeling the individuals who make up our society ought to be the foundation of such simulations, and the resulting agent bank of 1,000 generative agents is meant to serve exactly that purpose. (8/14)
At the same time, this work points to the beginning of an era in which generative agents can represent real people. This ought to bring both excitement and concern: how can we realize the potential benefits while safeguarding individuals' representation and agency? (9/14)
We spent countless hours discussing ethics with the team, the IRB, and participants. Here’s what we believe: systems hosting generative agents of real people must, at a minimum, support usage audits, provide withdrawal options, and respect individuals' consent and agency. (10/14)
So, to support research while protecting participant privacy, we (Stanford authors) plan to offer a two-pronged access system in the coming months: 1) open access to aggregated responses on fixed tasks, and 2) restricted access to individual responses on open tasks. (11/14)
For those interested, here is an open-source repository and a Python package for this work:
GitHub: github.com/joonspk-resear…
(While we are not releasing the participant data, I have included my personal generative agent in the repo. :)) (12/14)
In closing, doing great interdisciplinary work that respects the tradition and rigor of each field is beyond any one person. This work would not have been possible without an all-star team that embodied its interdisciplinary nature, intersecting AI and social sciences. (13/14)
Thank you to my coauthors, @msbernst, @percyliang, @RobbWiller, @cqzou, @aaronshaw, @makoshark, @merrierm, @carriejcai. And thank you @KolluriAkaash for helping out with the open source release, and to @StanfordHCI and @StanfordNLP for fostering this work. (14/14)
Our new research estimates that *one in twenty* comments on Reddit violates its norms: anti-social behaviors that most subreddits try to moderate. Yet almost none of these comments are moderated.
First, what does this mean? It means if you are scrolling through a post on Reddit, in a single scroll, you will likely see at least one comment that exemplifies bad behaviors such as personal attacks or bigotry that most communities would choose not to see. (2/13)
So let’s get into the details. What did we measure exactly? We measured the proportion of unmoderated comments in the 97 most popular subreddits that violate one of Reddit's platform-wide norms that most subreddits try to moderate (e.g., personal attacks, bigotry). (3/13)
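As a rough sketch of how such a rate can be estimated, one can label a random sample of unmoderated comments and count the share flagged as norm violations. The `classify_violation` callable below is a hypothetical stand-in for whatever labeling procedure (human or model) is actually used:

```python
import random

def estimate_violation_rate(unmoderated_comments, classify_violation,
                            sample_size=1000, seed=0):
    """Estimate the share of unmoderated comments that violate a platform
    norm (personal attacks, bigotry, etc.) by labeling a random sample."""
    rng = random.Random(seed)
    sample = rng.sample(unmoderated_comments,
                        min(sample_size, len(unmoderated_comments)))
    violations = sum(bool(classify_violation(c)) for c in sample)
    return violations / len(sample)
```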
E.g., say you are creating a new community for discussing a Star Wars game, with a few rules. Given this description, our tool generated a simulacrum like this: (2/10)
Why are these useful? In social computing design, understanding the impact of our design decisions is hard because many challenges do not arise until a system is populated by *many*: think of newcomers who unintentionally break norms, trolls, and other antisocial behaviors. (3/10)
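As a hedged illustration of what generating such a simulacrum can involve, a language model can be prompted with the community description and rules to produce posts from a cast of synthetic members, including troublesome ones. The persona list and the `ask_llm` callable below are hypothetical stand-ins, not the tool's actual design:

```python
PERSONAS = [
    "enthusiastic newcomer",
    "veteran fan of the game",
    "off-topic spammer",
    "hostile troll",
]

def simulate_thread(community_description, rules, ask_llm):
    """Generate one synthetic discussion thread for a described community,
    mixing well-behaved members with likely norm-breakers."""
    posts = []
    for persona in PERSONAS:
        prompt = (
            f"Community: {community_description}\n"
            f"Rules: {'; '.join(rules)}\n"
            f"Write a short comment in the voice of a {persona}."
        )
        posts.append((persona, ask_llm(prompt)))
    return posts
```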