Sigal Samuel
Senior Reporter at Vox. Former Religion Editor at The Atlantic. Author of the children's book OSNAT AND HER DOVE and the novel THE MYSTICS OF MILE END.

Sep 6, 2022, 18 tweets

Effective altruism’s most controversial idea is called “longtermism.”

It says we should focus on protecting FUTURE people (potentially more than present people).

It’s a deeply political idea, so the main question is: Who gets the POWER to decide?

🧵
vox.com/future-perfect…

The first thing to realize is that there isn't one longtermism. There are longtermisms. Think of this worldview as a train that can drop you off at different stations.

Effective altruists sometimes talk about this by asking each other: “Where do you get off the train to Crazy Town?”

I like to picture a rail line with 3 stations:
🚂weak longtermism
🚂strong longtermism
🚂galaxy-brain longtermism

Weak longtermism = “the long-term future matters more than we’re giving it credit for & we should do more to help it.”

Care about climate? This one's probably you.

Strong longtermism = “the long-term future matters more than anything else, so it should be our top priority.”

Galaxy-brain longtermism = “the long-term future matters more than anything else, so we should take big risks to ensure not only that it exists, but that it’s utopian!”

Longtermism is already influencing powerful people, from politicians to billionaires (@elonmusk cites it...), so it really matters *which* version of longtermism gains currency. Weak longtermism is a commonsense view, but there are serious objections to strong longtermism, like:

1⃣ It’s ludicrous to chase tiny probabilities of enormous payoffs. If you can save a million lives today or shave 0.0001% off the probability of human extinction, you should do the former, yet strong longtermism's logic implies the latter!

2⃣ We can’t reliably predict the effects of our actions in 1 year, never mind 1000 years, so it makes no sense to invest a lot of resources in trying to positively influence the far future. Acknowledging our cluelessness means limiting ourselves to the stuff we KNOW will do good.

3⃣ It’s downright unjust: People living in poverty today need our help NOW. If strong longtermists reallocate millions from present to future people, it harms present people by depriving them of funding for e.g. healthcare or housing. Those are arguably basic, inviolable rights.

Reading @willmacaskill's new longtermism book, I was struck by what he says on the last page: “How much should we in the present be willing to sacrifice for future generations? I don’t know the answer to this.”

But this is THE key question. It decides where we get off the train.

Last train stop: galaxy-brain longtermism. It says we should settle the stars. Not just can, but should, because we have a duty to catapult humanity out of a precarious earthbound adolescence into a flourishing interstellar adulthood.

Are you getting a whiff of Manifest Destiny?

.@willmacaskill doesn't endorse galaxy-brain longtermism: getting to a multi-planetary future may be important but doesn't trump all other moral constraints. But I asked him if that distinction is too subtle by half.

“Yeah, too subtle by half,” he said, “maybe that’s accurate.”

I think the debate about #EffectiveAltruism and #longtermism has become horribly confused. Some of the most vociferous critics are conflating different “train stations.” They don’t seem to realize that weak longtermism ≠ strong longtermism ≠ galaxy-brain longtermism. But...

That's not really the critics' fault. Longtermism runs on a series of ideas that link together like train tracks. And when the tracks are laid down in a direction that leads to Crazy Town, that increases the risk that some travelers will head, well, all the way to Crazy Town.

I think there's a better way to lay down tracks to caring about the future — a way that doesn't run such a high risk of leading us to Crazy Town. We can acknowledge that there are multiple sources of moral value and gather diverse POVs on how to divvy up resources between them.

EA is very Global North, and that's not just a problem on the level of racial diversity; it's a problem on the level of ideology. Intellectual insularity is bad for any movement, but it’s egregious for one that purports to represent the interests of all humans, now & for all eternity.

Effective altruism is Big Politics — it's dealing with questions about how to distribute all of humanity's resources. This shouldn't be up to a few powerful people to decide. Charting the future of humanity should be much more democratic.

As @CarlaZoeC told me: “I think EA has figured out how to have impact. They are still blind to the fact that whether or not that impact is positive or negative over the long term depends on politics. I don’t think they realize that in fact they are a political movement.”

I wrote this piece because EA/longtermism is doing politics on a global, even galactic scale — tons at stake! — yet the debate around it is still muddy. I tried to make it clearer here so we can critique the real thing, not a strawman. Please read & share! vox.com/future-perfect…
