Smerity (@Smerity), 25 tweets, 8 min read
I've been meaning to write about this for some time but I rarely write atm and this is a depressing topic. Anger helps me write however and boy am I angry. Sooooo, I know some want to reclaim/recover r/MachineLearning (r/ML) but I don't think that's even feasible. Tweetstorm:
Underlying broader @reddit community problems for r/ML (and other r/...):
- Anonymity allows incivility
- Brigades can easily "launch" from other subreddits (i.e. "We hate women" subreddits)
- r/Futurism and other "science (fiction)" subreddits leak in, meaning results are hyped
Now, to the straw that finally broke me... I knew r/ML had problems but had mostly just left the community, only occasionally seeing it in my @reddit feed. Then someone posted my article on bias in the ML community to r/ML. It wasn't a hard-hitting article.
smerity.com/articles/2017/…
The article had suggestions I thought many could get behind, or that I at least hoped might start a fruitful discussion. Don't discount someone because they're <minority>. Have a code of conduct. Don't go for controversial jokes when they're easily avoided. It wasn't insane stuff.
I knew r/ML was bad but holy hell I didn't know the shitshow my article would produce. The worst comments are not there now but (foreshadowing spoiler) that wasn't because the mods responded well. Some commenters were quite sane, but others...
reddit.com/r/MachineLearn…
Of comments left after the mods "cleaned" the post:
I was accused of being an SJW (social justice warrior), that "fetishizing diversity diminish(es) the contribution", that I was a #genderneutraldude, and "Everyone knows STEM is a cesspool of intolerance and bigotry /s".
I responded to some of the saner posts but mostly just gave up on r/ML again. I did check in every hour or two to see how the dumpster fire had evolved. Then the post disappeared. Wait, what? I asked the mods. They said it was removed due to "gradual escalation".
I replied noting that their logic wasn't sound. They replied:
"Feel free to resubmit ... but be prepared to manage/respond to the inevitable comments which will appear. If the thread escalates, we will have to remove it again."
Honestly, I didn't care. r/ML was dead to me anyway.
Let's take a short but really important aside - why did I decide to write this article? The strongest motivator was that I had _multiple_ female friends tell me about horrific experiences they had to deal with in the community. Academia, business, conferences, you name it.
The stories they confided in me went far beyond "just" creepy sexual/dismissive behaviour - many involved flat-out sexual assault. I had somehow become part of the whisper network and realized how horrifically common these situations were.
en.wikipedia.org/wiki/Whisper_n…
I wanted to fucking throw something at a wall. These were good people in trauma. They made stunning contributions to our field. They were treated horrifically and then had to go hide in the shadows to share their pain and their warnings. This isn't fucking right.
Unfortunately, many of the women weren't able to speak out. Their reasons go from nuanced to simple yet insurmountable. They've had their career trajectories and personal lives changed due to these events. Yet the perpetrators generally got off with (at best) a warning.
So, back to the main thread, _that's_ why I wrote the article. Here's a simple set of fucking guidelines that everyone should see as sane that might help prevent my friends, contributors to our community, from being sexually assaulted. Sounds simple enough? Fuck.
Now, back to the story: r/ML was dead to me. Fuck that place.
A day later however @KLdivergence, a researcher who does amazing work on fairness/accountability/transparency in ML, posted an article detailing her assault in academia and at conferences.
medium.com/@kristianlum/s…
I knew @KLdivergence from a talk she gave on predictive policing at @stitchfix_algo where she had quickly convinced me she was a champion for good in both society and science ^_^ I followed her work on Twitter where she became another cool person in my extended science circle.
The second I saw her article I immediately felt sick about what might happen on r/ML. My article was bland, yet the broader r/ML and Reddit community rained hellfire on it. I gave up on it. Fine. I had that option. She didn't.
This article was @KLdivergence taking an insanely brave stance. She was able to speak out and she did. This wasn't a whisper, this was a fucking shout. This article spoke for every one of my friends who were stuck in the shadows. This article spoke towards fighting all of it.
I replied again to the r/ML moderators as this was beyond fucking important.
"We're failing our colleagues and friends. We're failing them."
"I will respond to every comment and every troll to make sure [her post] isn't removed arbitrarily."
So there I was, sitting from midnight to 4am, replying to every comment I could so that the mods wouldn't be able to use their previously stated "logic" to remove it.
reddit.com/r/MachineLearn…
Luckily that thread ended up far better than my earlier article's - potentially because I was spamming the mods and commenters, potentially because some of the good members of the community were vocal - but none of this should be required.
r/ML is toxic by default, both structurally (see the first tweet) and on a personal level. We shouldn't worry about rehabilitating that community; we need to worry about rehabilitating our broader community. These shadowed assaults still happen.
There are labs still manned by misogynistic assholes who seemingly enjoy inflicting torment. There are people in our community who have had definitive evidence against them yet were allowed to change their jobs quietly under company protection so as to avoid commotion.
First, these victims deserve our protection, as we all deserve it as humans. That's the first fucking steadfast rule. Second, many of them have made amazing contributions to our community _through_ this adversity.
Imagine what they could do if they weren't fucking fighting this trauma and fear every day and if they didn't have to change companies/PhD supervisors/managers/fields as it was so toxic there that they _had_ to escape.
To paraphrase the end of my original article, what I simply want to say is that this is a community issue. All our communities. It won't be solved by one champion, no matter how brilliant they might be. Play your part, whatever that might be.