A thread on filter bubbles, confirmation bias, design against misinformation, and social media content policy. Or: how can people really think that the U.S. election was rigged, and is it social media's fault? 🧵
If you are reading this tweet, it is possible that you literally don't know a single person who voted for Donald Trump. Meanwhile, I know a couple of people who likely literally don't know a single person who DIDN'T vote for Donald Trump, besides me.
It's not like this is new - 30 years ago the same might be true just because all your friends live in your local community - but the internet makes us FEEL like we KNOW so many more people, and that we have a broader view of the world.
"I see thousands of people posting on Facebook every day and not a single one of them voted for Joe Biden." It might be easy to extrapolate from that that the election results can't be real, because... seriously, who are these people who voted for Biden?! You've never seen them!
And the fact that so many people say they get their news from social media like Facebook isn't an ALGORITHM problem. It's because PEOPLE are choosing to have their news curated for them by the people and groups they choose to follow.
And this isn't a conservative vs liberal thing. Again, many people were also completely shocked by how close the election was because they see thousands of people posting on Facebook who all voted for Biden. Filter bubbles all the way down.
But when we're talking about the incredible impact of confirmation bias - people believe things that reinforce what they already think, because who wants to be wrong??? - it's even more powerful when everyone around you ALSO believes it. You can't all be wrong!
So do you really think that Twitter labeling a tweet as false information will make someone say "oh, well if Twitter says that, it must be true"? Because meanwhile, everyone you follow is telling you that you're right. Not that you're wrong - or worse, that you're stupid.
I think that this kind of labeling can absolutely make a difference for smaller things that you may not have known were true or false, but if you've made up your mind about the election, or climate change, or whatever, I find it unlikely Twitter's label will make you think twice.
So this brings me to a couple of opinions: (1) Sure, social media is a big part of what's happening here, though I think that the much harder problems are about people rather than algorithms. How do you get people to believe things that prove them wrong?
(2) When content, including misinformation, is dangerous enough, a label is insufficient. If you're not going to be able to change people's minds, then the only option is to reduce the spread of that content such that it doesn't become further confirmation of an idea.
Anyway, this is just my current stream-of-consciousness thoughts on the matter, also influenced by my own interactions with some very conservative acquaintances who believe the election was fraudulent.
And it's a tangle of issues that create a perfect storm: some are about social media and some aren't.
My concerns are actually more about the people that you surround yourself with than news sources. I've just heard "everyone I know voted for Trump so there's no way he didn't win" a LOT lately.
Hm. I wonder what happens when a community moves off a platform because accounts are getting banned for reasons that conflict with the values of that community?
Or: I'm not saying Trump supporters have a lot in common with fanfiction writers, but remember LiveJournal? [Thread]
In 2007, LiveJournal suspended a bunch of accounts in an attempt to remove certain kinds of objectionable content, and this ended up sweeping up a lot of fanfiction and fan art accounts/communities. People were Not Happy. fanlore.org/wiki/Strikethr…
This policy change by LiveJournal was directly (if of course only partially) responsible for the conceptualization and creation of Archive of Our Own. And the rallying cry was: own the servers!!! cmci.colorado.edu/~cafi5706/CHI2…
In a few hours (evening for me, morning in India!) I'm giving a keynote for the COMPUTE conference on integrating ethics into computer science education. Including some links in this thread to papers and other things I will reference in that talk! Perfect for #CSEdWeek2020. :)
First: Why integrate ethics into technical CS classes? It's one way to change the culture towards recognizing that ethics is an integral part of the practice of computing, and not a specialization. howwegettonext.com/what-our-tech-…
Someone on TikTok asked if I could recommend books about tech ethics and I have never been so hyped to create a piece of content in my life. vm.tiktok.com/ZMJV3WPj8/
Obviously this list had to be visual so if there are obvious omissions it's probably because my copy of the book is trapped in my office I haven't been to since April or I've loaned it to one of my students. :)
Update on the bonkers omegaverse copyright lawsuit: a bogus DMCA claim for @thelindsayellis's video about bogus DMCA claims. AMAZING EXAMPLE of a complete misunderstanding of fair use. Let's talk about bad faith takedowns & what fair use protects! [Thread]
To briefly summarize the topic of Lindsay's original video:
Author sends DMCA takedowns for another author's books based on a claim of copyright infringement for worldbuilding concepts that originally came out of fanfiction. Gets more bonkers from there.
Following Lindsay's video, she immediately heard from Author's lawyer, with claims of copyright infringement and defamation. Re: copyright infringement, the video includes about 400 words of Author's book. (Heavily bleeped since, you know... it's werewolf erotica.)
I'm alarmed by this exact issue, and here's a related one I've thought a lot about:
All datasets that curate "public" data (e.g., photographs or social media posts) create secondary archives of content that otherwise the original content creators would have control over. [Thread]
There are many reasons why you might want to delete content that you originally shared publicly, and why that content still being used by others (even scientists) might be harmful. One example that comes to mind is someone who has been through gender transition.
I also wrote about a speculative example in this design fiction about research ethics, in which there is a curated dataset of "last words" of deceased life-loggers, then used by trolls to harass their surviving loved ones. cmci.colorado.edu/~cafi5706/grou…
As we speak, the short video I made for my #CSCW2020 paper with @BriannaDym is playing "at" the conference! "Moving Across Lands: Online Platform Migration in Fandom Communities." Here is a longer version that was targeted at a general audience! #CSCW2020
And if you missed my #CSCW2020 session but want to see the 5-minute talk about online platform migration that is focused specifically on takeaways for CSCW researchers, here is that presentation!
Here are the social computing research-related takeaways from "Moving Across Lands: Online Platform Migration in Fandom Communities" (link: cmci.colorado.edu/~cafi5706/CSCW…):