These past few days have given me such strong #Jan25 energy. Municipal cops, Egyptian state security -- same gassed-up, unbridled state-sanctioned violence.
If there are counter-protesting camel attacks this week, I'm going to go buy a lotto ticket
Seriously, though. If there are parallels, I fully expect a more organized white supremacist counter-protest this week. It may have been small pockets on Friday, but I'm afraid some real ugliness is about to go down.
*out of uniform white supremacists, to be clear
This was probably one of the last teams at a big tech/social company that had the ear of product and policy and that wasn't dismantled at the whim of a whiny white tech boy. 1/
They put out important self-critical research, including work that showed the conservative amplification bias of the platform, and took a user-driven concern around cropping bias and made a contest out of it. 2/
If you know @ruchowdh, you know that she is serious about building teams and products that will have real impact in the world. That means finding people with both technical and social expertise and empowering them to do good work. 3/
I appreciate a lot about this piece by @emilymbender, and I'm glad she was gracious enough to spend time debunking what seemed like fluff from the NYT. Read it in full, but I wanted to point out a few thoughts:
Emily does a great job shifting the framing of this piece and fundamentally challenging the technodeterminist terrain that it's on. The author boxes her critique into one of "we need to teach machines ethics" rather than the broader critique of organizational power and reach.
The second thing: we seem to be entering into a world of "access journalism, but for tech bros" if we weren't already there. It's a dangerous game, and one you'd expect to see more tech press challenge. But... 🙃
I followed Lilly's lead on this and stepped down from participating in this conference. I fully echo her sentiment and well-thought-out thread, with a few notes of my own. 1/
First, _funding matters_ in academic venues. Even though it may be a small act, withholding labor can be akin to withholding legitimacy from those funders. In the "AI ethics" space, as with much of AI, money is pouring in right and left to "solve" ethics. 2/
as @mer__edith + I said "[w]ithout independent, critical research that centers the perspectives + experiences of those who bear the harms of this tech, our ability to understand + contest the overhyped claims made by industry is significantly hampered" 3/
It turns out the Ethical AI team was the last to know about a massive reorganization, which was prompted by our advocacy. This was not communicated to us at all, despite promises that it would be.
Nothing about what we asked for has been addressed here.
* Samy Bengio is no longer in our reporting chain.
* An apology has not been offered to Timnit by Jeff Dean or Megan Kacholia.
* Our input on such a reorganization was solicited, but the decision was then made behind closed doors.
This is nothing short of a betrayal.
We were told to trust the process, trust in decision-makers like Marian Croak to look out for our best interests. But these decisions were made behind our backs.
A thing to pay attention to in @sundarpichai's non-apology is this bit:
"One of the best aspects of Google’s engineering culture is our sincere desire to understand where things go wrong and how we can improve."
Google has the notion of a "blameless" postmortem: the idea that if a system breaks, folks sit down and write up what went wrong, without blaming anyone.
This was brought up by a higher up in a prior meeting as well.
But the idea that HR and @timnitgebru's firing operate like engineering systems (which are already social systems, but let's bracket that) shows how quickly this analogy breaks down.
There is blame to go around, and we know where to put it.