Eli Tyre
Dec 15 · 31 tweets · 6 min read
I'm less confident about the cost-effectiveness of AMF than I was a few years ago.

To my knowledge, no one whom I know personally (whose epistemics I've evaluated), and who is not under distorting incentives, has double-checked that the oft-cited claims are correct.
(I'm ready for you all to come out of the woodwork and show me that I'm wrong about that : P

@slatestarcodex seems like he has maybe done the work here, and I've read enough of his argumentation to trust his epistemics.)
I know of a few folks who have done a more-or-less-deep dive into GiveWell's results.

At least two were unimpressed, but there's a selection effect where I'm more likely to hear from the people who think the results don't hold up than those who do.

But I haven't checked myself.
Again, my guess is that the claims are basically correct. But I don't think that I, personally, have that strong of a reason to think one way or the other.

Mainly a lot of people around me seem to think that malaria nets are among the most effective life-saving interventions.
And crucially, I now think it is a mistake to defer to GiveWell because other EAs say GiveWell is careful, and that I should trust EAs because they are the people who do their due diligence to maximize impact when giving to charity.

That's basically an epistemic pyramid scheme.
Lest anyone think that's a strawman, this was basically my outlook in 2016.

I hadn't assessed GiveWell's research, but I confidently recommended that others give to GiveWell-recommended charities, because I trusted EA's milieu, because we were the thoughtful altruistic people.
Indeed, I still think that EAs are more thoughtful and more altruistic than the average person, by a lot!

But that only counts for so much if most of us are deferring to social cues and only a small fraction are doing difficult original thinking or verification.
My claim is not that EAs are particularly bad. It's that they give undue trust to the EA blob.
My impression from talking with lots of young EAs is that they have a very similar outlook towards EA as the one I had a few years ago.
They often talk about getting more resources "for EA", in a way that reifies the community and takes for granted that the things it does are good, because we're the careful altruistic thinkers who can be trusted.

They take for granted that EA having more power and more resources is obviously good.
They _might_ speculate about how EA could somehow do harm on-net for fun at parties, but my impression is that they don't really believe this is realistic in their bones.

(It's fun because it feels exotic and speculative. If it felt realistic, it would be scary instead.)
"Maybe EAs will make mistakes, but obviously EA having power is better than any of the alternatives!"
My probability of EA doing massive harm on net is probably my main crux with mainstream EAs.
The strongest example of this kind of "deferring to reified EA" is people donating to / campaigning for Carrick Flynn without knowing much about his character or his policies, on the argument that it was "good to have an EA in Congress."

But I see it all over the place.
- Donating to EA funds
- Finding an "EA job" doing ops at an EA org
- Doing EA movement building
- Doing productivity / mental health / rationality / etc. coaching for EAs.
All of these have impact-models that depend on a reified EA blob that does good things, such that non-specifically boosting the blob is good.
I feel pretty suspicious of people reasoning in that way, in general.
And this strategy seems a lot less like a good bet for positive impact if you think it's a live possibility that the EA blob will have negative impacts on net.

If so, then your own small boosting of EA is making the world worse, not better.
I think many EAs clearly trust reified EA in a way that amounts to "We are the good group!"

I think they don't really find it credible that EA as a whole could really lead to disastrous outcomes...which is one way "We are the good group!" can feel, from the inside.
(An intuition pump: consider a social justice activist thinking that of course some activists will fail or make mistakes, but that it's ridiculous that the crusade for anti-racism as a whole could lead to WORSE outcomes for black people, on net.
Or a 19th-century communist disbelieving that communism would lead to 7 million deaths.

If you get some foundational assumptions wrong, you can totally have an impact far into the negatives. Not just doing nothing, but causing the OPPOSITE of your intended effect.
This volley of reference class tennis doesn't prove anything, but it might give my reader an intuition for how my view differs from theirs.

Especially since I've seen many iterations of people doing stuff to help with AI risk that probably made things worse.)
Now, there's an important quantitative question of what fraction of EAs (however you draw the boundary) are reasoning by the "EA is good, therefore boosting EA is good" pattern.

That's hard for me to estimate because of selection effects in which people I end up talking with.
My GUESS is that more than half of the attendees of EAGs over the past 2 years basically defer to EA in this way?

All the more so for the more involved EAs, though less so for the EA thought leaders.

But that's just a guess.
Ultimately, I'm reporting my sense of the vibe when I talk with EAs: the sense that decreases my interest in EA, even though I share their principles and am actively trying to figure out how to make the world much better.

I haven't done a careful anthropological analysis.
This whole dynamic is ironically diagonalizing. The more EAs believe that "EA is the community that does careful due diligence" the less careful due diligence EAs tend to do.

The more EAs take for granted that EA is good and does good things, the less likely that is to be true.
I have basically no objection to AMF and to people donating to AMF.

But I DO object to the distributed organism that defers to GiveWell on the basis of social cues, out of a social trust which is founded on "EAs being the good people who do the research."
Of course, some other people might have done the legwork to assess a lot of these object-level questions. And from their epistemic vantage point, some of the EA-endorsed actions might be slam dunks.

That I haven't done my due diligence on GiveWell doesn't mean NO ONE has.
But I roll to disbelieve that this is true of the typical young EA enthusiastically trying to get more people involved.