Chris Bail (chris_bail_duke 🧵)
Duke Professor directing https://t.co/oLxcnFusBQ & https://t.co/1DxdkCv8mo, more active at https://t.co/NnnIB9JbHs… & https://t.co/Q0kHKHOowG
May 9, 2022 12 tweets 5 min read
1/ Before you share this, please consider taking a closer look at the data. This graphic makes it look like all Republicans are anti-science, and all Democrats are pro-science. The reality is probably much different; here's why:

2/ People who answered this question about their confidence in the scientific community only had four choices: "1) A great deal; 2) Only some; 3) Hardly any; 4) Don't know." People who had significant trust in science but not a great deal had no valid response category.
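To make the measurement point concrete, here is a minimal R sketch using purely simulated, hypothetical trust scores (not the actual survey data behind the graphic): two groups whose fine-grained trust levels overlap heavily can look sharply divided once answers are forced into three coarse categories.

set.seed(1)
# Hypothetical 0-10 trust scores for two groups; values are made up for illustration only
trust <- data.frame(
  group = rep(c("Group A", "Group B"), each = 1000),
  score = c(rnorm(1000, mean = 7.5, sd = 1.5),
            rnorm(1000, mean = 6.0, sd = 1.5))
)
# Collapse the continuous scores into the survey's three coarse answer categories
trust$answer <- cut(trust$score, breaks = c(-Inf, 3, 7, Inf),
                    labels = c("Hardly any", "Only some", "A great deal"))
# Row proportions: the share answering "A great deal" differs far more starkly
# than the heavily overlapping underlying distributions do
round(prop.table(table(trust$group, trust$answer), margin = 1), 2)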
Apr 25, 2022 13 tweets 4 min read
1/ The *last* thing we need right now is more untested speculation about how to fix social media. We have *research* that can help us evaluate @elonmusk’s proposals to transform Twitter, and many of these studies might inspire him to throw some cold water on his plans 🧵

2/ CLAIM #1: Twitter is applying content moderation unevenly and persecuting people with certain political beliefs more than others. RESEARCH: Twitter is probably not unfairly persecuting conservatives:
Apr 25, 2022 5 tweets 2 min read
It’s a compelling idea, but 6% of Twitter users currently generate about 76% of all political content on the platform, and those 6% are overwhelmingly from the political extremes. pewresearch.org/politics/2019/…
Apr 25, 2022 4 tweets 2 min read
Some important possible limitations of @elicitorg are mentioned here by @emilymbender. To her concerns I would add that I do not think large language models should replace careful, human-led literature reviews; rather, I think they can usefully augment them.

Perhaps she is right that my thread was too "hype-y"; I was mostly excited because I have seen so few examples of ML applied to human tasks that work so well. In any case, I encourage folks to read her thread (and @elicitorg's response as well).
Apr 21, 2022 7 tweets 3 min read
1/ Can A.I. do our literature reviews for us? Stop everything and try elicit.org, an amazing new tool that uses large language models to answer research questions from empirical research. In the video below I ask it "Does social media negatively impact mental health?"

2/ It immediately finds several of the most important reviews, and further refines results after you give it feedback. For now, it's limited to Semantic Scholar, which only covers about 60% of research articles, but the proof of concept here is amazing... And there's more:
Mar 28, 2022 12 tweets 2 min read
1/ In this new piece I ask whether we can improve social media without a basic science of how platforms shape human behavior.

2/ I’m worried we’ve simply accepted the status quo— especially because most of our current platforms were never designed to be democracy’s public square.
Dec 21, 2021 4 tweets 3 min read
1/4 Do you want to learn computational social science (for free) and start research projects with scholars from many different fields? The Summer Institutes in Computational Social Science (sicss.io) will run in *THIRTY-ONE* locations around the world in 2022!!!!

2/4 Though many 2022 #SICSS locations hope to run in person, some will be virtual institutes at a variety of exciting institutions as well. For a full list of sites, see sicss.io (where details about each institute will be posted in the very near future).
Aug 10, 2021 6 tweets 3 min read
YouTube's algorithm is *not* radicalizing people, according to a study by leading scholars that examined 29 million YouTube viewing sessions and recently appeared in a prestigious peer-reviewed journal: pnas.org/content/118/32…

There is more research/work to be done (especially with experimental designs and on other platforms), but this is the most comprehensive and careful analysis I've yet seen, by @homahmrd @aaronclauset @duncanjwatts @markusmobius @DavMicRot and Amir Ghasemian.
Mar 23, 2021 13 tweets 7 min read
1/ Do you feel hopeless about political polarization on social media? Introducing a new suite of apps, bots, and other tools from our Duke Polarization Lab that you can use to make this place less polarizing: polarizationlab.com/our-tools

One of the biggest problems with social media is that it amplifies extremists and mutes moderates, leaving us all feeling more polarized than we really are. Our tools can help you avoid extremists and identify moderates with whom you might engage in more productive conversations.
Nov 9, 2020 9 tweets 4 min read
Many have expressed skepticism about calls for healing and #depolarization over the past few days. But what does the latest research indicate about the prospects for reconciliation? Let’s look at the #SocSciResearch... 1/9

An experiment that asked Democrats and Republicans to discuss politics in person for just 15 minutes improved their attitudes towards each other by *70 percent* compared to a control group. See @m_levendusky's forthcoming book “Our Common Bonds” 2/9
Sep 9, 2020 7 tweets 2 min read
1/n Are you a *complete beginner* in computational social science who wants to learn how to code? I'm happy to announce our new "coding bootcamp" video tutorials for the Summer Institutes in Computational Social Science: compsocialscience.github.io/summer-institu…

2/n I cover everything from setting up RStudio to data cleaning (and "wrangling"), visualization, programming, modeling, and communicating (with R Markdown, Rpres, and Shiny), as well as collaboration with GitHub.
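For a flavor of what those tutorials cover, here is a minimal R sketch of my own (using a built-in example dataset, not the bootcamp's actual materials) that strings together wrangling, visualization, and a simple model:

library(dplyr)
library(ggplot2)

# Wrangling: summarize fuel efficiency by number of cylinders
mtcars %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  # Visualization: a simple bar chart of the grouped summary
  ggplot(aes(x = factor(cyl), y = avg_mpg)) +
  geom_col() +
  labs(x = "Cylinders", y = "Average miles per gallon")

# Modeling: a basic linear regression
fit <- lm(mpg ~ wt + cyl, data = mtcars)
summary(fit)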
Jun 26, 2020 6 tweets 1 min read
1/n How do computational social scientists land non-academic jobs? I put this question to a panel of senior leaders at for-profit and non-profit companies during a wonderful webinar yesterday, and I’d like to share what I learned:

2/n The cadence of non-academic work is very different. Academics like to take their time developing the perfect research design, but in other settings, people need answers, fast. Also, many academics are used to working alone, whereas most non-academic work is team-based.
May 29, 2020 6 tweets 5 min read
1/n Want to learn about computational social science this summer? We are posting high-quality video lectures of ALL the material from the Summer Institutes in Computational Social Science (#SICSS2020) by @msalganik and me over the next few weeks! compsocialscience.github.io/summer-institu…

2/n These videos cover a range of topics, from ethics to text analysis, digital field experiments, mass collaboration, and more (only the first few days of material are up there now, but more will be added soon).
Nov 25, 2019 18 tweets 4 min read
1/n Did Russian trolls actually influence the attitudes and behaviors of U.S. social media users? Our Polarization Lab’s new article suggests the answer might be “no”: pnas.org/content/early/…

2/n Many people think Russian trolls exerted strong influence upon U.S. social media users because of the sheer scale and apparent sophistication of their techniques. There is also anecdotal evidence that IRA accounts succeeded in inspiring American activists to attend rallies.
Dec 14, 2018 5 tweets 3 min read
1/5 Interested in learning how to collect and analyze social media data using topic models, text networks, or word2vec? I'm pleased to announce I am releasing an open-source version of my "Text as Data" class from Duke's Data Science program: cbail.github.io/textasdata/Tex…

2/5 The course website (above) includes tutorials on a range of subjects with annotated R code. The class assumes basic knowledge of R and describes the techniques we use in the @polarization lab to run studies like this: pnas.org/content/115/37… and this: pnas.org/content/113/42…
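As a rough, self-contained illustration of one technique on that syllabus (my own sketch, not code from the course itself), here is a small topic model fit in R with the topicmodels package on its bundled example corpus:

library(topicmodels)

# Example document-term matrix of news articles that ships with the package
data("AssociatedPress", package = "topicmodels")

# Fit a small LDA topic model (4 topics, fixed seed for reproducibility)
ap_lda <- LDA(AssociatedPress, k = 4, control = list(seed = 123))

# Show the five highest-probability terms in each topic
terms(ap_lda, 5)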
Sep 8, 2018 9 tweets 4 min read
1/8 Did you know that Twitter is experimenting with new features that would expose its users to opposing political views? In this @nytimes piece, I describe why this idea could backfire, based upon a large online experiment recently conducted by my lab: nyti.ms/2Nu45gL

2/8 We surveyed 1,225 Republican and Democratic Twitter users about their views on social policies. One week later, we offered them money to follow a Twitter account which they were told would tweet 24 times each day for one month. They were not told what the bots would tweet.