This thread is for live-tweeting the ethics session at #SIGCSE2021. FIVE papers at @SIGCSE_TS this year about ethics in computer science education! 🧵
First up: "How Students in Computing-Related Majors Distinguish Social Implications of Technology" by Diandra Prioleau et al. at University of Florida.
They presented students with scenarios about AI technology (e.g. recidivism algorithms)...
... and found that their participants could spot social implications but frequently missed issues of systemic discrimination. Surprisingly, about half of the students had never heard of these issues at all, which points to a gap in computing curricula. dl.acm.org/doi/10.1145/34…
Yay! When asked about this, the authors said that based on their findings they would also suggest in-situ ethics learning across the curriculum, rather than just in a single standalone ethics course. :)
Next up: "Computing Ethics Narratives: Teaching Computing Ethics and the Impact of Predictive Algorithms" by Beleicia Bullock et al.
As part of the Computing Ethics Narratives (CEN) project, they deployed a module about predictive policing in an algorithms course.
Paraphrased: "I don't think that a judge needs to learn how to code, but they do need to understand what the algorithm is doing"... as an argument for K-12 tech ethics education!
Also: "We're not advocating for fear, we're advocating for responsibility."
Next up: @Ben_Rydal presenting on "Using Role-Play to Scale the Integration of Ethics Across the Computer Science Curriculum" dl.acm.org/doi/10.1145/34…
Their roleplaying games are available for anyone to use. One is about self-driving buses, and one is about college admissions algorithms. I have used the first in classes before, and it went very well!
They also noted some inspiration from the paper I was involved in about ethics integration in an HCI class, particularly its suggestion of "perspectival CS" (the ability to identify multiple perspectives)... which I had totally forgotten was in that paper. :) mwskirpan.com/files/Ethics_E…
Last paper: "Deep Tech Ethics: An Approach to Teaching Social Justice in Computer Science" by Rodrigo Ferreira & Moshe Vardi
They redesigned a CS ethics class to push on "deeper" questions closely related to present issues of social justice and power. dl.acm.org/doi/10.1145/34…
I really support a focus on social justice in thinking about teaching ethics. I really struggle with the word "ethics" and what it captures and doesn't. I think it's useful shorthand, but I worry that some might interpret it too narrowly.
And a final thought from an audience question from a student: so how do we actually implement this in the real world? There's a disconnect between what we're learning in class and the power we have in industry.
💯 This is a huge challenge we need to be thinking about.
As you know, I am a fan of @tiktok_us these days, but I need to put them on blast for a bad design choice. Folks interested in content moderation/platform safety, buckle up. This is a story about bad people exploiting a loophole for harassment. We can learn from this. [Thread 🧵]
TikTok has a "block" feature that works similarly to Twitter. If you block someone, you can't see them and they can't see you. This includes comments.
So now we have A (person being harassed) and B (awful person who thinks it's fun to e.g. leave death threats in comments)...
B has figured out that they can comment on A's post and then immediately block A, which then means that A can't see that comment - and in fact doesn't even know it's there since it doesn't show up in their notifications.
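To make the loophole concrete: here's a minimal sketch of the logic (hypothetical code, not TikTok's actual implementation, and all the names are made up) showing how a symmetric block check applied at comment-viewing time produces exactly this behavior — the harassing comment is hidden from A but stays visible to everyone else.

```python
# Hypothetical sketch of a symmetric block check on comment visibility.
# Not TikTok's actual code; it just illustrates the loophole described above.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    blocked: set = field(default_factory=set)  # names of users this person has blocked

@dataclass
class Comment:
    author: User
    text: str

def visible_comments(viewer: User, comments: list) -> list:
    """Hide a comment whenever either side has blocked the other."""
    return [
        c for c in comments
        if c.author.name not in viewer.blocked      # viewer blocked the author
        and viewer.name not in c.author.blocked     # author blocked the viewer
    ]

# A posts a video; B leaves an abusive comment, then immediately blocks A.
a = User("A")
b = User("B")
comments = [Comment(b, "death threat")]
b.blocked.add("A")

print([c.text for c in visible_comments(a, comments)])          # [] -> A never sees it
print([c.text for c in visible_comments(User("C"), comments)])  # ['death threat'] -> everyone else does
```

Under this (assumed) model, the comment also never triggers a notification for A, so the harassment is invisible to the target while remaining public to their audience.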
Some thoughts on why this news - the potential for next gen watches (both Samsung Galaxy and Apple) to provide blood glucose readings - could be game-changing (not so much for #T1D folks but for everyone else). macrumors.com/2021/03/05/app…
This tech would almost certainly not be an improvement over existing continuous glucose monitors like what I use, but (I think?) it's rare for people with Type 2 to have insurance coverage for CGMs, especially if you're not on insulin and don't have to worry about lows.
The beauty of continuous monitoring over finger sticks is that you can get DATA. Unless you waste a lot of (expensive!) test strips to try to experiment, you're not going to know e.g. exactly when and how much your blood sugar spikes after meals.
Ever thought about how messed up it is from a harm vs benefit perspective that copyright infringement is more heavily moderated/enforced than, say, hate speech and harassment? I was reminded of this by re-listening to this @ThisAmerLife episode. [Thread🧵] thisamericanlife.org/670/beware-the…
The second act is the story of Lenny Pozner, the father of a Sandy Hook victim, who was harassed, threatened, and stalked by Alex Jones-fueled conspiracy theorists accusing him of being a "crisis actor." One tactic was making cruel memes out of photographs of his son.
And after trying to report content and get things taken down for lies and harassment, he finally realized that his best course of action was reporting copyright violations since he owned the photographs which were e.g. used in a YouTube video.
Not that I was *surprised* to see this study about predicting "political orientation," but since I've been talking about the "gaydar" (sigh) algorithm from the same researcher for a while now, here's some reflection. nature.com/articles/s4159…
Given criticism of the previous paper (which, if you're not familiar, is here: psyarxiv.com/hv28a/ ) I was genuinely expecting to see an ethical considerations section by the end of this paper, since that criticism pretty much spelled out exactly what it should say. There is not one.
There is a lengthy "author notes" document linked from the article that includes FAQs (like "physiognomy????") and twice warns readers not to "shoot the messenger," so I guess that's the ethics statement.
Hm. I wonder what happens when a community moves off a platform because accounts are getting banned for reasons that conflict with the values of that community?
Or: I'm not saying Trump supporters have a lot in common with fanfiction writers, but remember LiveJournal? [Thread]
In 2007, LiveJournal suspended a bunch of accounts in an attempt to remove certain kinds of objectionable content, and this ended up sweeping up a lot of fanfiction and fan art accounts/communities. People were Not Happy. fanlore.org/wiki/Strikethr…
This policy change by LiveJournal was directly (if of course only partially) responsible for the conceptualization and creation of Archive of Our Own. And the rallying cry was: own the servers!!! cmci.colorado.edu/~cafi5706/CHI2…
A thread on filter bubbles, confirmation bias, design against misinformation, and social media content policy. Or: how can people really think that the U.S. election was rigged, and is it social media's fault. 🧵
If you are reading this tweet, it is possible that you literally don't know a single person who voted for Donald Trump. Meanwhile, I know a couple of people who likely literally don't know a single person who DIDN'T vote for Donald Trump, besides me.
It's not like this is new - 30 years ago the same might have been true just because all your friends lived in your local community - but the internet makes us FEEL like we KNOW so many more people, and that we have a broader view of the world.