Chris Bail (@chris_bail_duke)
Mar 23, 2021 · 13 tweets
1/ Do you feel hopeless about political polarization on social media? Introducing a new suite of apps, bots, and other tools that you can use to make this place less polarizing from our Duke Polarization Lab: polarizationlab.com/our-tools
One of the biggest problems with social media is that it amplifies extremists and mutes moderates, leaving us all feeling more polarized than we really are. Our tools can help you avoid extremists and identify moderates with whom you might engage in more productive conversations.
The Duke Polarization Lab’s Bipartisanship Leaderboard identifies politicians, celebrities, activists, journalists, media outlets, and advocacy groups whose posts get likes from people in both parties: polarizationlab.com/bipartisanship…
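The thread doesn't spell out how the leaderboard is scored. As a rough illustration only (this is my assumption, not the lab's published method), one simple metric would reward accounts whose likes are balanced across the two parties:

```python
def bipartisanship_score(dem_likes: int, rep_likes: int) -> float:
    """Illustrative balance score in [0, 1]: 1.0 when likes are split
    evenly between parties, 0.0 when they come from one side only.
    A toy metric, not the Polarization Lab's actual formula."""
    total = dem_likes + rep_likes
    if total == 0:
        return 0.0
    return 2 * min(dem_likes, rep_likes) / total

# Hypothetical account liked 400 times by Democrats, 600 by Republicans:
print(bipartisanship_score(400, 600))  # 0.8
```

An account liked only by one party scores 0, so the metric naturally filters out figures who are popular solely within their own camp.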
Meet Polly, a bot who retweets messages from the opinion leaders on our bipartisanship leaderboard every few hours to help you identify people on the other side with whom you might find compromise: polarizationlab.com/our-bots
What are the issues on social media where there is room for compromise? This tool tracks terms that both Republicans and Democrats are discussing on Twitter, and analyzes the text of these posts to see whether the two sides express similar sentiment about them: polarizationlab.com/issue-tracker
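A minimal sketch of how such a cross-party sentiment comparison could work, using a tiny hand-made word lexicon and made-up example posts (illustrative assumptions throughout; the lab's actual pipeline is not described in the thread):

```python
# Toy lexicon-based sentiment scoring -- word lists and posts are hypothetical.
POSITIVE = {"support", "good", "great", "hope", "agree"}
NEGATIVE = {"bad", "oppose", "terrible", "fear", "disagree"}

def sentiment(text: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def mean_sentiment(posts: list[str]) -> float:
    """Average sentiment across a group's posts about a shared term."""
    return sum(sentiment(p) for p in posts) / len(posts)

dem_posts = ["we support infrastructure it is good"]
rep_posts = ["we also support infrastructure great plan"]
print(mean_sentiment(dem_posts), mean_sentiment(rep_posts))  # 2.0 2.0
```

When the two group means land close together, as in this toy example, the term is a candidate area of compromise; a large gap would flag a polarized issue.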
What about the trolls? Use our Troll-o-meter to learn how to identify the characteristics of online extremists and monitor the types of language that they use: polarizationlab.com/troll-o-meter.
Finally, what about you? Learn what your posts say about your politics and how your offline views compare to your online behavior using these tools: polarizationlab.com/tweet-ideology & polarizationlab.com/ideologyquiz
You can also use our tools to identify whether you are in an echo chamber. But make sure to read our research, which suggests that stepping outside your echo chamber can be counterproductive if not done properly: pnas.org/content/pnas/1… (ungated)
We’re also developing a Polarization Pen Pal Network to connect real social media users with opposing views, since research shows brief conversations among non-elite people can have strong depolarizing effects.
To learn more about these apps and how we can create a bottom-up movement to fight polarization, read the first chapter of my new book, Breaking the Social Media Prism: bit.ly/38Ozzrt. If you like it, support the indie booksellers linked here: bit.ly/2OBlsip.
Or join me at one of the next few public lectures I’ll be giving about my new book and the new technology described above, listed here: bit.ly/3qUQONX, including a free event at @DukeU tomorrow [registration required].
If you want to get updates about the new technology we create to improve political discussions on social media, subscribe to our mailing list here: polarizationlab.com/subscribe
If you have ideas about how to make these tools better, reach out: polarizationlab.com/contact-us. We try hard to translate insights from research into actionable tools, but this is easier said than done. We want to make academia more open and transparent, for the benefit of all.

