Joon Sung Park
CS Ph.D. student @StanfordHCI + @StanfordNLP. Previously @MSFTResearch, @IllinoisCS & @Swarthmore. Oil painter. HCI, NLP, generative agents, human-centered AI
Nov 18, 2024 14 tweets 3 min read
Simulating human behavior with AI agents promises a testbed for policy and the social sciences. We interviewed 1,000 people for two hours each to create generative agents of them. These agents replicate their source individuals’ attitudes and behaviors. 🧵 arxiv.org/abs/2411.10109

When we presented generative agents last year, we pointed to a future where we can simulate life to understand ourselves better in situations where direct engagement or observation is impossible (e.g., health policies, product launches, or external shocks). (2/14)
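To make the idea concrete, here is a minimal sketch (mine, not the paper's actual architecture) of how an interview transcript could condition an LLM so it answers questions in the voice of the interviewed participant. The Chat Completions call follows the standard OpenAI Python SDK; the function name, prompt wording, and model choice are illustrative assumptions.

```python
# Hypothetical sketch: condition a chat model on a participant's interview
# transcript so it answers survey-style questions as that person.
# Not the paper's implementation; names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask_as_participant(transcript: str, question: str, model: str = "gpt-4o") -> str:
    """Answer a question in the voice of the interviewed participant."""
    system = (
        "You are simulating the person whose interview transcript follows. "
        "Answer questions the way they plausibly would, staying consistent "
        "with their stated attitudes and experiences.\n\n"
        "--- INTERVIEW TRANSCRIPT ---\n" + transcript
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example (assumed survey item):
# answer = ask_as_participant(transcript,
#     "Generally speaking, would you say that most people can be trusted?")
```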
Aug 30, 2022 13 tweets 4 min read
Our new research estimates that *one in twenty* comments on Reddit violates its norms: anti-social behavior that most subreddits try to moderate. Yet almost none of these comments are actually moderated.

🧵 on my upcoming #cscw2022 paper w/ @josephseering and @msbernst: arxiv.org/abs/2208.13094

First, what does this mean? It means that if you are scrolling through a post on Reddit, a single scroll will likely show you at least one comment that exemplifies bad behavior, such as personal attacks or bigotry, that most communities would choose not to see. (2/13)
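A quick back-of-the-envelope check of that "single scroll" claim (my illustration, not a figure from the paper), assuming roughly 20 comments are visible per scroll:

```python
# If ~1 in 20 comments violates community norms, how likely is it that a
# screenful of comments contains at least one violation?
p_violation = 1 / 20          # estimated per-comment violation rate (from the thread)
comments_per_scroll = 20      # assumed number of comments visible in one scroll

p_at_least_one = 1 - (1 - p_violation) ** comments_per_scroll
print(f"P(at least one violating comment) ~ {p_at_least_one:.0%}")  # about 64%
```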
Aug 11, 2022 10 tweets 5 min read
How might an online community look after many people join? My paper w/ @lindsaypopowski @Carryveggies @merrierm @percyliang @msbernst introduces "social simulacra": a method of generating compelling social behaviors to prototype social designs 🧵
arxiv.org/abs/2208.04024 #uist2022

You can see some of its generated behaviors—posts, replies, trolls—in our demo here: social-simulacra.herokuapp.com

E.g., say you are creating a new community for discussing a Star Wars game with a few rules. Given this description, our tool generated a simulacrum like this: (2/10)
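For intuition, here is a minimal sketch of the social-simulacra idea under my own assumptions (not the paper's exact prompts or pipeline): feed a community's name, description, and rules to an LLM and ask it to generate plausible seed posts, including ones that skirt the rules, so designers can preview how the space might be used and misused. The community dict, function name, and model are hypothetical.

```python
# Hypothetical sketch of prototyping a social design with generated behavior.
from openai import OpenAI

client = OpenAI()

COMMUNITY = {
    "name": "r/NewStarWarsGame",
    "description": "A community for discussing an upcoming Star Wars game.",
    "rules": ["Be civil.", "No spoilers in titles.", "Stay on topic."],
}

def generate_posts(community: dict, n_posts: int = 5, model: str = "gpt-4o") -> str:
    """Generate plausible seed posts (including off-norm ones) for a new community."""
    rules = "\n".join(f"- {r}" for r in community["rules"])
    prompt = (
        f"Community: {community['name']}\n"
        f"Description: {community['description']}\n"
        f"Rules:\n{rules}\n\n"
        f"Write {n_posts} realistic posts new members might make, each with a "
        "username. Mix on-topic discussion and questions with at least one post "
        "that skirts or breaks the rules, so designers can anticipate misuse."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# print(generate_posts(COMMUNITY))
```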