NYU's Center for Social Media and Politics
We work to strengthen democracy by conducting rigorous research, advancing evidence-based public policy, and training the next generation of scholars.
Sep 1, 2022 10 tweets 5 min read
🚨 We’ve got a new @CSMaP_NYU paper in @journalsafetech 🚨

Fraud & conspiracy narratives proliferated on social media around the 2020 election. Our analysis finds YouTube was more likely to recommend fraud videos to users already skeptical about the legitimacy of the election: tsjournal.org/index.php/jots…

There's growing scholarly consensus that social media algorithms have little influence on online echo chambers, in which users only see content reaffirming pre-existing views. But what if that content is undermining democratic confidence?
Dec 27, 2021 4 tweets 2 min read
#5 in our 2021 year in review: "Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data," published in @journalqd by @andyguess @aslett_kevin @j_a_tucker @RichBonneauNYU @Jonathan_Nagler 1/

journalqd.org/article/view/2…

As the conversation about how misinformation spreads on social media continues, CSMaP researchers analyzed newly available data to discover what kinds of news articles Americans see and share on Facebook. 2/
Dec 26, 2021 4 tweets 2 min read
#4 in our 2021 year in review: "YouTube Recommendations and Effects on Sharing Across Online Social Platforms," published in Proceedings of the ACM on Human-Computer Interaction @ACMDL by @codybuntain @RichBonneauNYU @Jonathan_Nagler @j_a_tucker 1/

dl.acm.org/doi/10.1145/34…

In January 2019, YouTube began excluding potentially harmful content from video recommendations while allowing the videos to remain on the platform. The goal was to reduce YouTube’s role in propagating such content. Did it work? 2/
Dec 10, 2021 9 tweets 5 min read
Misinformation spreads rapidly online. In response, Facebook & Twitter have suggested using ordinary users as fact checkers. Would this work? In @techpolicypress, we examine two approaches to test crowdsourced fact-checking. 1/

techpolicy.press/a-modest-ox-ex…

In September, @_JenAllen and @DG_Rand published a study in @ScienceAdvances finding crowds can match professional fact-checkers. 2/

science.org/doi/10.1126/sc…
Oct 29, 2021 11 tweets 5 min read
Misinformation spreads rapidly online. In response, Facebook & @Twitter have suggested using ordinary users as fact checkers. But our new #OpenAccess article, in the inaugural issue of @journalsafetech, finds this is likely not a viable solution 1/

tsjournal.org/index.php/jots…

Ordinary users, and machine learning models based on information from those users, cannot effectively identify false and misleading news in real time compared to professional fact checkers, according to our experiment. 2/
Oct 6, 2021 10 tweets 6 min read
A lot to unpack from today’s Facebook whistleblower Senate hearing. Here are some interesting storylines and insightful commentary we’ve seen: 🧵 1/

Many have noted this was one of the most focused and productive Big Tech hearings they've seen. Perhaps lawmakers are ready to cross the aisle and work together on meaningful regulation?
Oct 5, 2021 8 tweets 4 min read
Platforms have troves of research studying their societal impact. The recent FB revelations, and today's whistleblower hearing, show why it's critical for govt to open that data to outside researchers, @j_a_tucker & @Jonathan_Nagler write in @NYDailyNews

nydailynews.com/opinion/ny-ope…

At @CSMaP_NYU, data is the foundation of everything we study. Often, the data will tell us something different from the anecdotal evidence circulating in the media and online.