Keiko (@keikoinboston), 65 tweets, 23 min read
I have been on the Internet for ~25 years. This hasn’t been my experience at all. There is no blanket experience of the Internet for ppl of "marginalized race or gender”. Depends what sites you use, how you post, how you respond.

"The Internet of Garbage" excerpt pp. 22-23
2. Okay, this is going to be a thread. Don’t @ Jeong.

It’s looking like TIoG is a Gamergate exposé from the feminist perspective.

Jeong wants readers to know that the media has failed to talk about how women get swatted too.

Left: p. 26, right p. 27
3. Doxing is part of the conspiracy to “punish women for being visible on the Internet.”

“In the case of misogynists, women are [the garbage that needs to be taken out].”

p. 27
4. "The essential nature of the problem is tied to forms of violence against women like stalking, assault, and intimate partner violence.”

p. 29
5. If anyone would like to follow along while I live tweet my reading of “The Internet of Garbage” you can find it here.

6. I really think this is true of any target of harassment, regardless of whether they're a member of a marginalized group. Context is everything.

p. 30
7. Though catastrophizing is a real thing. This is the best description of catastrophizing I’ve found:

"you turn commonplace negative events into nightmarish monsters."

samuelthomasdavies.com/book-summaries…
8. Quote from @DavidDBurnsMD’s "Feeling Good: The New Mood Therapy”.
amazon.com/Feeling-Good-N…

Source:
samuelthomasdavies.com/book-summaries…
9. Jeong compares the “spectrum of online harassment” with catcalling and assault which she describes as existing “on a spectrum of sexist behavior—that is, the consistent male entitlement to a woman’s attention that they receive…"

p. 30
10. This is not spelled out but I think Jeong’s definition of harassment involves:

1: individual-to-individual attacks (an individual cannot harass a group, but a group can harass an individual)

pp. 31-32
11.

2: target must be a member of a marginalized group (e.g., women or POC), with an exception for allies (e.g., Israel Galvez on p. 26)

3: if the target lacks emotional resilience, the impact on the target must also be taken into consideration
12.

- Legal system should handle most extreme forms of harassment.

- Online platforms should be designed to “dampen harassing behavior” and shield “targets from harassing content.”

p. 32
13.

“It also means building user interfaces that impart a feeling of safety to targets. Code is never neutral and interfaces can signal all kinds of things to users.”

p. 32

(If you think my reading comprehension is off, tell me what your takeaway is.)
14.

“The important thing to take away is that simply deleting or filtering offending content is not the end goal. Deletion can be a form of discouragement towards harassers and safety for the harassed, but it’s only one form."

pp. 32-33
15. pp. 34-37 discuss content moderation that doesn’t involve deleting & banning. This section was influenced by The Virtues of Moderation by Prof. James Grimmelmann of Cornell Tech.

Abstract:
scholarship.law.cornell.edu/facpub/1486/

Pdf:
james.grimmelmann.net/files/articles…
16. Other content moderation options:
- Filtering
- Editing
- Annotation
- Amplification & diminution
17. Besides banning, other consequences for users:
- IP bans
- Suspension
- Accountability processes
18. That last one sounds a lot like re-education:

“An accountability process pulls a user aside, not only to put a check on their behavior, but also to rehabilitate them.”

en.wikipedia.org/wiki/Reeducati…

p. 36
19. The “Tribunal” in the game League of Legends is reminiscent of bias response teams, which are increasingly common in higher ed, and of restorative justice, which is often used to address complaints so that the offender is rehabilitated rather than removed.

p. 36
20. An example of a bias response team website:

MassArt Bias Response:
massart.edu/bias-response

Restorative justice:
en.wikipedia.org/wiki/Restorati…
21. “Harassers’ motivations are ill-understood. It may be that harassers are simply misguided people. It may also be that they are incurable sociopaths. (It may be both.)"

pp. 37-38
22. This 2016 paper (paywalled) found that two of the four Dark Tetrad personality traits (psychopathy & sadism) predict trolling behavior, but that trolling may be better explained by negative social reward. sciencedirect.com/science/articl…

Paper (free) on the Dark Tetrad:
researchgate.net/publication/28…
23. Read about the study here:
psychologytoday.com/us/blog/the-in…
24. This 2012 paper (paywalled) correlates the Dark Triad personality traits with bullying behavior. The strongest link is with psychopathy, then Machiavellianism & narcissism.
sciencedirect.com/science/articl…
25. “But accountability process work because they not only give people a chance to have a genuine change of heart…”

“A user doesn’t have to have a real change of heart to decide to simply go along with the norms that are being enforced.”

p. 37

???
26. Better to set platform norms before problems arise through:
- Articulation (publishing clear rules)
- Positive reinforcement
- The aura of accountability (real names)

Also important to create user investment in the community so they “will police and enforce norms”.

pp. 37-38
[🎶 Please hold… this thread will resume after I’ve slept. 39 pages remaining. 🎶]
25a. Argh.

* “But accountability processes work...”
27. pp. 40-42 detail Garcia v. Google which I hadn’t paid much attention to. It’s actually a really interesting case which ended up being heard en banc before the 9th Circuit which reversed the lower court’s opinion.
law.justia.com/cases/federal/…
en.wikipedia.org/wiki/Garcia_v.….
28. EFF’s coverage of Garcia v. Google:
eff.org/cases/garcia-v…

The case is interesting but I couldn’t figure out what it was doing in TIoG until I got to p. 42.
29. “Yet lurking beneath the thorny legal and doctrinal issues is the great paradigm shift of the present digital age, the rise of the conscious and affirmative belief that women should have, must have, some kind of legal recourse to threats online."

p. 42
30. Shouldn’t all people, regardless of gender, have “legal recourse to threats online”?

Jeong does agree that the lower court decision in Garcia v. Google was “wrongly decided.”
31. pp. 43-45 are critical of the DMCA and Section 230 of the CDA, which tech scholars, hackers, and activists have long criticized.
en.wikipedia.org/wiki/Digital_M…
en.wikipedia.org/wiki/Section_2…
32. Jeong uses revenge porn sites as an example of the “worst actors” who may be protected by Section 230. They may be prosecuted for other related offenses but enjoy the same protections that immunize sites “from legal liability for the posts of their users.”

pp. 44-45
33. On p. 44 she defines revenge porn sites as those which “purportedly post user-submitted nude pictures of women without their consent,” making no mention of the fact that men can also be victims.

theconversation.com/the-picture-of…
34. Jeong cautions that changing the legislation won’t be easy because any change would “likely suffer from mission creep, making the exception bigger and bigger.”

p. 45
35. On p. 47 Jeong characterizes non-consensually uploaded nude photos and what happened to Garcia in the aftermath of the release of “Innocence of Muslims” as "hate crimes".

Reddit & Garcia "fell into the same trap: They turned a hate crime into a copyright crime.”
36. This is a curious statement for someone with a JD (US law degree) to make. Neither of these is defined as a hate crime under US law.
en.wikipedia.org/wiki/Hate_crime
37. Jeong characterizes RIAA and MPAA lawsuits as “how one successfully manages to reach through a computer screen and punch someone else in the face."

p. 47

en.wikipedia.org/wiki/Recording…
en.wikipedia.org/wiki/Motion_Pi…
38. Jeong’s view of what online harassment is:

“Online harassment, amplified on axes of gender identity, race, and sexual orientation, is an issue of social oppression that is being sucked into a policy arena that was prepped and primed by the RIAA in the early 2000s."

p. 47
39. Jeong thinks the current understanding of how the Internet should be policed is based primarily on a copyright model, not on protecting “vulnerable Internet users” from “gendered harassment.”

p. 47
40. Jeong believes that “an anti-harassment strategy that models itself after Internet copyright enforcement is bound to fail.”

“Content removal is a game of whack-a-mole…” and Garcia was chasing a dream of being able to completely control her image on the Internet.

pp. 47-48
41. Jeong recognizes that it isn’t possible for anti-harassment efforts to focus solely on “deletion and removal” and that “architectural reconfiguration, filtering, community management, norm-enforcement” must also be employed.

pp. 47-48
42. Sorry - screenshot in previous tweet was from pp. 48-49.
[🎶 Please hold… this thread will resume after I’ve slept. 28 pages remaining. 🎶]
43. pp. 51-52 explain how 1A isn’t applicable to social media and publishing platforms because they are “private entities”, not the government.

en.wikipedia.org/wiki/First_Ame…
44. Jeong speculates that the presence of so much "American constitutional jargon” in the terms of service of so many platforms “may even be a cultural carry-over from the birth of DARPAnet and Usenet.”

p. 51

(Any legal/tech people want to comment on this?)
45. pp. 53-54 give a very brief overview of the history of 1A doctrine, with Jeong citing Oliver Wendell Holmes Jr.’s dissent in Abrams v. US. ...
en.wikipedia.org/wiki/Abrams_v.…
46. … and talks about John Stuart Mill’s “marketplace of ideas” and Hannah Arendt’s discussion of the agora and the polis in The Human Condition.
en.wikipedia.org/wiki/Marketpla…
en.wikipedia.org/wiki/The_Human…
47. Jeong says these ideas are found in codes of conduct and terms of service for platforms founded in the US.

p. 54
48. pp. 55-57 talk about how social networking sites have created monopolies in both the US and the developing world.
49. In the US, it’s because that’s where your friends and family are and “where the conversation is happening.” In the developing world, sites like Facebook use zero-rating to essentially be the Internet.
en.wikipedia.org/wiki/Zero-rati…

p. 55
50. Social media can also create silos where people on different platforms may not be having the same conversations or seeing the same news.
51. Jeong cites the 2014 unrest in Ferguson, Missouri as an example. All the Ferguson news was happening on Twitter; Facebook was essentially silent by comparison.
en.wikipedia.org/wiki/Ferguson_…

p. 56

medium.com/message/fergus…
52. All this raises questions about how platforms like Facebook should or shouldn’t censor content. Jeong cites an ACLU blog post that says: "In short, when it comes to the vast realm it oversees, Facebook is a government.”
aclu.org/blog/national-…

L: p. 56
R: ACLU blog post
Thank you! 🌺

Thread is still ongoing - 20 more pages to go!
53. Jeong calls out platforms that purport to be pro-free speech but instead engage in moderation practices that are anti-free speech.

p. 58
59. Jeong calls out reddit in particular for the Gawker-blocking policies put in place after journalist Adrian Chen doxxed a reddit user known as violentacrez in an article on Gawker.

p. 58

gawker.com/5950981/unmask…
60. Jeong points out that reddit operates on a model where “unpaid subreddit moderators” operate “their own little fiefdom where they enforce their own rules. Reddit’s supposed commitment to free speech is actually a punting of responsibility."

p. 58
61. Many platforms have had to play catch-up on content moderation after abusive behavior on their platforms has grown. “For many platforms, moderation is an afterthought, tacked on top of the technology.”

p. 59
62. Jeong believes that “Effective anti-harassment can make a freer marketplace of ideas, rather than inhibiting it.”

p. 59
63. Jeong quotes Vijaya Gadde from Twitter: “Freedom of expression means little as our underlying philosophy if we continue to allow voices to be silenced because they are afraid to speak up.”

Reddit also changed their policy to curb harassment.

redditblog.com/2015/05/14/pro…

p. 59
64. “Promoting user safety doesn’t mean mass censorship is the answer.”

Jeong draws a line between “small intimate communities” and “large-scale platforms” and says they have “different kinds of obligations to their users."

p. 59
65. Large platforms “are beginning to resemble public squares of discussion and debate…"

“Platforms like Facebook, Twitter, or Wikipedia might not have a legal obligation to protect free speech, but failure to do so would have serious consequences for culture itself.”

p. 59
66. Jeong believes that “Communities that do purport to be for everyone have an obligation to cultivate a community of inclusive values simply because they should put their money where their mouths are.”

p. 60