Discover and read the best of Twitter Threads about #CSCW2021

Most recent (5)

Belatedly, I wanted to say a bit about what was discussed yesterday with the @sigchi Research Ethics Committee at #CSCW2021. What is the committee and what are folks in our community struggling with or thinking about when it comes to research ethics and processes?
The SIGCHI research ethics committee serves an advisory role on research ethics in the SIGCHI community. We can answer questions generally, but typically we come in during the review process to help reviewers who raise ethical issues. (We advise but do not make decisions.) [Image: typical process]
The most common outcome when we weigh in on ethical issues that arise during paper review is that reviewers ask authors for clarifications, more information, or further reflection in their paper. Here is a list of some general topics that have come up in recent years. [Image: list of topics]
Read 14 tweets
How can human-AI teams outperform both AI alone and humans alone, i.e., achieve complementary performance? In our new paper, presented at #CSCW2021, we propose two directions: out-of-distribution examples and interactive explanations. Here’s a thread about these new perspectives: [Image: abstract of the paper]
1/n First, prior work adopts an over-optimistic scenario for AI: the test set follows the same distribution as the training set (in-distribution). In practice, examples seen at test time may differ substantially from training, and AI performance can drop significantly (out-of-distribution).
2/n Thus, we propose experimental designs with both out-of-distribution and in-distribution examples in the test set. When AI fails on out-of-distribution examples, humans may be better at detecting problematic patterns in AI predictions and can offer complementary insights.
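The gap between in-distribution and out-of-distribution evaluation is easy to see in a toy experiment. The sketch below is my illustration only, not the authors' code: it assumes scikit-learn, uses synthetic data, and trains a classifier that leans on a spurious feature whose relationship to the label reverses at test time.

```python
# Illustrative sketch only (not the authors' code): a model trained
# in-distribution can lose much of its accuracy on out-of-distribution
# test examples. Assumes numpy and scikit-learn; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, spurious_flipped=False):
    """Two weakly informative core features plus one spurious feature.

    In-distribution, the spurious feature correlates with the label;
    out-of-distribution (spurious_flipped=True), that correlation reverses.
    """
    y = rng.integers(0, 2, size=n)
    core = rng.normal(loc=y[:, None] * 1.0, scale=1.0, size=(n, 2))
    s = (1 - y) if spurious_flipped else y
    spurious = rng.normal(loc=s[:, None] * 3.0, scale=0.5, size=(n, 1))
    return np.hstack([core, spurious]), y

X_train, y_train = sample(2000)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_id, y_id = sample(1000)                           # in-distribution test set
X_ood, y_ood = sample(1000, spurious_flipped=True)  # out-of-distribution test set

print("in-distribution accuracy:    ", accuracy_score(y_id, model.predict(X_id)))
print("out-of-distribution accuracy:", accuracy_score(y_ood, model.predict(X_ood)))
```

In this toy setup the reversed correlation pulls accuracy far below the in-distribution figure; failures of that kind are what a mixed test-set design is meant to surface for human partners.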
Read 12 tweets
Platform bans of offensive influencers have been in the news a lot recently. While much conversation has focused on the ethics of deplatforming, the preceding questions about its effectiveness and what happens in its aftermath have remained under-explored.
To explore these questions, my team (@asbruckman, @Diyi_Yang and @ChristianBoyls1) and I examined 3 case studies of prominent deplatforming on Twitter - Alex Jones, Milo Yiannopoulos, and Owen Benjamin. We report on this work in an upcoming @ACM_CSCW paper #CSCW2021
Analyzing over 49M tweets and accounting for existing temporal trends, we found that deplatforming reduced not only the conversation about these influencers, but also the spread of many anti-social ideas and conspiracy theories (e.g., Pizzagate, Sandy Hook).
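Accounting for pre-existing temporal trends matters because tweet volume may already have been rising or falling before a ban. One common way to do this, sketched below on synthetic daily counts (my illustration under that assumption, not the paper's actual analysis), is an interrupted time-series regression that estimates the level and slope changes after the deplatforming date separately from the prior trend.

```python
# Illustrative sketch only (not the authors' analysis): an interrupted
# time-series regression separating a pre-existing trend from the level and
# slope changes after an intervention such as a deplatforming date.
# Assumes numpy and statsmodels; the daily tweet counts here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

days = np.arange(200)                   # day index within the study window
event_day = 100                         # hypothetical deplatforming date
post = (days >= event_day).astype(float)

# Synthetic counts: an upward trend, then a drop in level and slope post-ban.
counts = (500 + 2.0 * days - 300 * post - 1.5 * post * (days - event_day)
          + rng.normal(scale=30, size=days.size))

# Design matrix: intercept, prior trend, post-ban level change, post-ban slope change.
X = sm.add_constant(np.column_stack([days, post, post * (days - event_day)]))
fit = sm.OLS(counts, X).fit()
print(fit.params)  # [baseline, trend/day, level change at ban, slope change after ban]
```

The level-change and slope-change coefficients are what carry the estimated effect of the ban once the prior trend is held fixed.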
Read 8 tweets
Can we map how literary genres are redefined by online book taggers and reviewers? 📚

In work at #CSCW2021, we show how @LibraryThing reviewers work together using free-text tags to create a shifting folksonomy that powers many IRL libraries. Genres are blurry + context-dependent! [Images: scatterplot comparing book overlap and user overlap per tag pair; scatterplot of book overlap by misclassification count]
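One simple way to see how blurry genre boundaries can be, as an illustration only (this is not the paper's method, and the tag data below is made up), is to compare tags by the overlap of the books they are applied to.

```python
# Illustrative sketch only (not the authors' code): measuring overlap between
# free-text genre tags via the sets of books carrying them. Tag data is made up.
from itertools import combinations

# tag -> set of book ids tagged with it (hypothetical examples)
tag_books = {
    "fantasy":         {"b1", "b2", "b3", "b4"},
    "science fiction": {"b3", "b4", "b5"},
    "young adult":     {"b2", "b3", "b6"},
}

def jaccard(a, b):
    """Jaccard similarity: shared books over all books carrying either tag."""
    return len(a & b) / len(a | b)

for t1, t2 in combinations(tag_books, 2):
    print(f"{t1} / {t2}: {jaccard(tag_books[t1], tag_books[t2]):.2f}")
```

High overlap between nominally distinct tags is one signal that genre labels are context-dependent rather than cleanly separable.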
You can read the full paper with @mellymeldubs and @dmimno here, and I’ll be presenting this virtually at #CSCW2021: maria-antoniak.github.io/resources/2021…
LibraryThing is similar to Goodreads but is more independent + accessible. Where Goodreads throttles access to its reviews, LibraryThing shows all reviews to its users. @mellymeldubs and I write about this as an “algorithmic echo chamber” in our paper on the Goodreads classics.
Read 13 tweets
How do people collaborate with algorithms? In a new paper for #CSCW2021, Yiling Chen and I show that even when risk assessments improve people's predictions, they don't actually improve people's *decisions*. Additional details in thread 👇

Paper: benzevgreen.com/21-cscw/
This finding challenges a key argument for adopting algorithms in government. Algorithms are adopted for their predictive accuracy, yet decisions require more than just predictions. If improving human predictions doesn't improve human decisions, then algorithms aren't beneficial.
Instead of improving human decisions, algorithms could generate unintended and unjust shifts in public policy without being subject to democratic deliberation or oversight.
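To make the prediction/decision distinction above concrete, here is a hedged toy illustration (not the paper's experiment; the act/don't-act framing and all numbers are hypothetical): a decision applies a threshold and error costs on top of a risk prediction, so a more accurate prediction used with a poorly chosen decision rule can still produce worse decisions than a noisier prediction used sensibly.

```python
# Toy illustration only (not the paper's study): prediction accuracy and
# decision quality can come apart, because a decision also depends on the
# threshold and the costs attached to each kind of error. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_risk = rng.uniform(0, 1, size=10_000)             # latent failure probability
failed = rng.uniform(size=true_risk.size) < true_risk  # realized outcome

def decision_cost(predicted_risk, threshold, cost_fn=5.0, cost_fp=1.0):
    """Average cost of act-if-above-threshold decisions (hypothetical costs)."""
    act = predicted_risk >= threshold
    false_neg = (~act) & failed      # did not act, but failure occurred
    false_pos = act & (~failed)      # acted, but no failure would have occurred
    return cost_fn * false_neg.mean() + cost_fp * false_pos.mean()

noisy_risk = np.clip(true_risk + rng.normal(scale=0.3, size=true_risk.size), 0, 1)

print("accurate prediction, poorly chosen threshold:",
      round(decision_cost(true_risk, threshold=0.9), 3))
print("noisy prediction, sensible threshold:        ",
      round(decision_cost(noisy_risk, threshold=0.2), 3))
```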
Read 7 tweets
