Joon Sung Park
Aug 11, 2022 · 10 tweets · 5 min read
How might an online community look after many people join? My paper w/ @lindsaypopowski @Carryveggies @merrierm @percyliang @msbernst introduces "social simulacra": a method of generating compelling social behaviors to prototype social designs 🧵
arxiv.org/abs/2208.04024 #uist2022
You can see some of its generated behaviors—posts, replies, trolls—in our demo here: social-simulacra.herokuapp.com

E.g., say you are creating a new community for discussing a Star Wars game, with a few rules. Given this description, our tool generated a simulacrum like this: (2/10) [Image: screenshot of a synthetic community]
Why are these useful? In social computing design, understanding the impact of our design decisions is hard, since many challenges do not arise until a system is populated by *many* users. Think of newcomers unintentionally breaking norms, trolling, or other antisocial behaviors. (3/10)
What if we could generate an unbounded number of synthetic users and the social interactions between them that can realistically reflect how actual users might behave in our system designs?

Social simulacra lets you do that. (4/10)
What powers social simulacra are LLMs (e.g., GPT-3). We observe that their training data covers a wide range of social behavior, and with proper prompting they can generate compelling simulacra of possible interactions. This lets us ask “what if” questions to iterate on our designs. (5/10)
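The prompting idea can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual prompt format: we compose the designer's community description, its rules, and a synthetic persona into a prompt, then hand it to an LLM to produce a plausible post. The persona name and prompt wording here are made up for the example, and the LLM call is stubbed so the sketch runs without API access.

```python
def build_post_prompt(community_description: str, rules: list[str], persona: str) -> str:
    """Compose a prompt asking a model to write a post in this persona's voice.

    The exact wording is a hypothetical sketch of the technique, not the
    paper's prompt.
    """
    rule_text = "\n".join(f"- {r}" for r in rules)
    return (
        "The following is a new online community.\n"
        f"Description: {community_description}\n"
        f"Rules:\n{rule_text}\n\n"
        f"{persona} writes the following post to this community:\n"
    )


def generate_post(prompt: str, llm=None) -> str:
    """Run the prompt through an LLM (e.g., GPT-3).

    Stubbed here so the sketch is self-contained; pass a callable that
    wraps a real model to get actual generations.
    """
    if llm is None:
        return "[generated post would appear here]"
    return llm(prompt)


# Example: a hypothetical community description and persona.
prompt = build_post_prompt(
    "A community for discussing a new Star Wars game",
    ["Be civil", "No spoilers in post titles"],
    "DarthFan42, a long-time player",
)
post = generate_post(prompt)
```

Repeating this over many generated personas, and feeding generated posts back in as context for replies, is how one could populate an entire synthetic community from a single design description.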
*So what does this all mean?*
For social computing designers, this means they can design “proactively.” Designers in our study said their current practice is reactive: they implement interventions only after a dumpster fire has already damaged the community. (6/10)
Social simulacra could change this equation by helping designers understand possible failure modes in their designs before they arise and cause harm. (7/10)
Also important: social simulacra (intentionally) generate not only good but also bad behaviors, so they can help community leaders understand what could go well *and* wrong in their community. But this puts in focus… (8/10)
... the need for close collaboration with social computing stakeholders, along with accountability measures and evaluation techniques such as auditing, to ensure that our prototyping approach is used for its intended purpose—empowering online communities—and not for auto-trolling. (9/10)
Thanks to @GoogleAI, HPDTRP, @StanfordHAI, and @OpenAI for their support.

And finally, one more thank you to my amazing collaborator on this work, @lindsaypopowski, my mentors, @Carryveggies and @merrierm, and my advisors, @percyliang and @msbernst. (10/10)


