Remmelt Ellen
Feb 8 · 39 tweets · 20 min read
If your reaction to @timnitGebru and @xriskology is that their descriptions of effective altruism are stereotyped and unreasonable, consider:

None of this is new.

———

In 2019, @glenweyl criticised EA in an interview with @80000Hours and went on to post: radicalxchange.org/media/blog/201…
In 2021, Scott Alexander wrote “Contra Weyl on Technocracy”, and what followed was a series of debates where everyone seemed to talk past each other and leave feeling smug.

Track back some of those posts here:
google.com/search?q=glen+…
Weyl: Here is my holistic emotional sense of what’s going wrong with these EAs who are invading spaces in the Bay Area, and I want them to stop.

EAs: Politics alert. Such imprecise and intense judgements. EAs donate to GiveDirectly too – did you consider that?

Sound familiar?
^— my subjective paraphrase, ofc
I came in a few months later, willy-nilly, unaware of all the intense debates that came before:
forum.effectivealtruism.org/posts/LJwGdex4…
I had dug deep into possible blindspots of the EA community.

Then I noticed @juliagalef and @VitalikButerin misinterpret @glenweyl’s concerns in an interview.

I thought, ‘Hey, this does not match up with what I worked out. Let me write back to try and bridge these perspectives.’
Their misinterpretations (in my view) were of what @glenweyl said in the 80K interview.

Excerpts from the 80,000 Hours interview transcript: https://80000hours.
Glen and I did a call, from which I learned a bunch.

Then came long email exchanges with Rob Wiblin (who interviewed Glen), and then with Scott Alexander (after discovering his “contra” debates). Both kindly took time to engage, and both dismissed Glen as ‘a bit all over the place’ and ‘haranguing’.
Over the following months, I got clearer about the narrow representations of “AGI” risks that were common among EAs, and about what they overlooked in tech development.

I started emailing my thoughts to open-minded AI Safety people after 1-on-1s.

First screenshot summarises my learnings from @glenweyl. [Screenshot excerpts, truncated: three ways I think he’s tracking legit concerns; learning biases creeping into ‘aligned’ designs; worries that OpenAI speeds up development of more generally capable algorithms.]
(Then I talked with @ForrestLandry19, probed his arguments why this AGI thing cannot be ‘aligned’ to stay safe in the first place, and ever since have been trying in vain to find people in #AISafety willing to consider that what they are trying to do is impossible. Good times.)
Glen and a co-author drafted a media article that started by discussing the controversy between Cade Metz and Scott Alexander, and then drew a bunch of links between EA and Silicon Valley.

The sociological descriptions resonated with me but were also very unnuanced. I proofread the draft and left comments:
Here are ways I changed my mind about that feedback:
1. Even attempts at a “subtle” analysis of IQ across populations are fraught with the motivated biases of a self-identified intelligentsia using measurements against minorities. @xriskology linked to amp.theguardian.com/news/2018/mar/…
Put another way, the concern is not only about the construct validity of using IQ questions to measure across populations that change a lot over time (see the Flynn effect: average measured IQ rose roughly three points per decade over the 20th century, suggesting scores track environment and test familiarity as much as any fixed trait).

There is also the concern that even *if* some minor effect remained that could somehow be disentangled from what migrants lacked (conventional education) and what marginalised minorities were exposed to, it would be *used* by people in power to justify decisions. [Screenshot: my reply to someone on a draft.]
1. (continued)
So the concerns about how EAs prize “intelligence” (as gauged by them) above other qualities and virtues when deciding whose recommendations to follow and take seriously are real. So are the concerns about thought experiments on population ethics, and on machines replacing humans.

It is concerning for people in power to confidently reason about what other humans are like and what would be good for them, with little capacity to listen – except when input is presented in a way judged to make sense or be “high-signal” by the standards and prestige markers of the community.
2. I ended up in a weird position: having tried my best at the time to prevent unconstructive mudfights with EA, I now wonder whether intense pressure was needed after all.

Glen took the feedback and wrote a more nuanced post of his concerns with EA: radicalxchange.org/media/blog/why…
2. (continued)
Since that Oct 2021 post, what changed about the community at its core?
Nothing (or to be precise: very little)

Instead, despite repeated warnings by @CarlaZoeC, @LukaKemp, me, and others, the Centre for Effective Altruism and 80K scaled up their book and podcast outreach.
2. (continued)
Glen humbly bowed out, telling me that he was not in a good (emotional) position to contribute to critical reforms in this world, and that what was needed was someone who could empathise and communicate effectively with the EA/rationality audience.
2. (continued)
Scott Alexander criticises criticism of EA as being overly general and unsubstantiated (my paraphrase):
astralcodexten.substack.com/p/criticism-of…
Not knowing that I had tried to spare him from another media thrashing, Scott uncharitably lifted out and interpreted excerpts from my blindspots post.

(Scott does disclaim that he thinks he is being terribly unfair here – while maybe overlooking his actual social influence.)
I notice Scott missed the point, which is that I was trying to bridge to other people’s perspectives, and see where they complement ours – rather than arbitrate which perspective is the “superior” one.
And that Scott used the fact that intelligent (high-IQ?) people tend to think more individualistically as a (possible) justification for society to think more individualistically.

Considering that IQ scores correlate with cognitive decoupling, that argument seemed circular.
So that’s where we are now.

Other leaders from other communities stepped in to criticise effective altruism, and it seems we are in the same cycle again.

Also just noticed that 80,000 Hours paid for Google Ads:
Those ads link to an 80,000 Hours article, titled “Misconceptions about effective altruism”.
80000hours.org/2020/08/miscon…

How convenient.
I just tried other search prompts, and am mostly confused about what the pattern is, and about whether 80K took down the Google Ad in the meantime (or whether it tracks my IP address in incognito mode? I don’t know).
My guess is that someone at 80K saw the @ mention above in this thread, and reacted fast to remove the Google ads – the professional response team they are.
They’re probably viewing this thread right now. Hi :)
A few others were talking with Glen Weyl at the time, and also gave input that contributed to his decision not to publish the article.

See comment thread with Devin Kalish at astralcodexten.substack.com/p/criticism-of… [Screenshots: comments on the post “Criticism of Criticism of Criticism”.]
To be precisely accurate here: the emails between @slatestarcodex and me were *ridiculously long*. They make for interesting reading (incl. places where Scott pointed out where I had overextended my claims).

My exchange with Rob Wiblin consisted of shorter, friendly back-and-forth disagreements.

• • •


More from @RemmeltE

Feb 9
Just spotted this August post on the EA Forum that had -8 votes on it (found after looking through the comments of @chrisscross, who seemed like an interesting fellow).

I can totally see why people reacted negatively, yet, holistically, the post is right on point:
forum.effectivealtruism.org/posts/QRaf9iWv…
See this list of common EA inclinations (I just noticed some cite my original blindspots post – confirmation bias beware). [Screenshot, truncated: “The elements of EAs epistem…”]
How I imagine someone involved in EA reading through:
- “EA… severely harmful worldview” – Ouch! I don’t like.

- “Polycrisis” – That sounds so vague. Where is the substance?

- “Focus on analytic logic, rationality and the western scientific method” – So, what’s wrong with that?
Read 21 tweets
Feb 7
I like a few people involved with @collect_intel but have fundamental concerns:
1. Institutions/mechanisms first focus.

2. Putting unreliable blackbox models in between human interactions.

3. Objectifying preferences.

4. Alliances with two unscrupulous power-seeking actors.
(whom I won’t name, so as not to end up in the crossfire)
I’m going to be honest. The more I read through this whitepaper, the more concerning I found it.
Read 20 tweets
