The first thing to realize is that there isn't 1 longtermism. There's longtermisms. Think of this worldview as a train that can drop you off at different stations.
Effective altruists sometimes talk about this by asking each other: “Where do you get off the train to Crazy Town?”
I like to picture a rail line with 3 stations:
🚂weak longtermism
🚂strong longtermism
🚂galaxy-brain longtermism
Weak longtermism = “the long-term future matters more than we’re giving it credit for & we should do more to help it.”
Care about climate? This one's probably you.
Strong longtermism = “the long-term future matters more than anything else, so it should be our top priority.”
Galaxy-brain longtermism = “the long-term future matters more than anything else, so we should take big risks to ensure not only that it exists, but that it’s utopian!”
Longtermism is already influencing powerful people, from politicians to billionaires (@elonmusk cites it...), so it really matters *which* version of longtermism gains currency. Weak longtermism is a commonsense view, but there are serious objections to strong longtermism, like:
1⃣ It’s ludicrous to chase tiny probabilities of enormous payoffs. If you can save a million lives today or shave 0.0001% off the probability of human extinction, you should do the former, not the latter, even though strong longtermism's logic points the other way! (A toy calculation of that logic follows this list.)
2⃣ We can’t reliably predict the effects of our actions in 1 year, never mind 1000 years, so it makes no sense to invest a lot of resources in trying to positively influence the far future. Acknowledging our cluelessness means limiting ourselves to the stuff we KNOW will do good.
3⃣ It’s downright unjust: People living in poverty today need our help NOW. If strong longtermists reallocate millions from present to future people, it harms present people by depriving them of funding for e.g. healthcare or housing. Those are arguably basic, inviolable rights.
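To make objection 1⃣ concrete, here's a toy expected-value calculation. The future-population figure and the framing are illustrative assumptions, not numbers from the thread:

```python
# Toy expected-value comparison behind objection 1.
# All numbers here are illustrative assumptions, not claims from the thread.

lives_saved_today = 1_000_000              # option A: save a million people now
future_people = 1e16                       # assumed number of potential future people
extinction_risk_reduction = 0.0001 / 100   # shave 0.0001% off extinction probability

expected_future_lives = future_people * extinction_risk_reduction

print(f"Option A (help people now):   {lives_saved_today:,.0f} lives")
print(f"Option B (reduce x-risk):     {expected_future_lives:,.0f} expected lives")
# With these assumptions, B "wins" on expected value (10 billion vs 1 million),
# which is exactly the conclusion objection 1 calls ludicrous.
```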
Reading @willmacaskill's new longtermism book, I was struck by what he says on the last page: “How much should we in the present be willing to sacrifice for future generations? I don’t know the answer to this.”
But this is THE key question. It decides where we get off the train.
Last train stop: galaxy-brain longtermism. It says we should settle the stars. Not just can, but should, because we have a duty to catapult humanity out of a precarious earthbound adolescence into a flourishing interstellar adulthood.
Are you getting a whiff of Manifest Destiny?
.@willmacaskill doesn't endorse galaxy-brain longtermism: getting to a multi-planetary future may be important but doesn't trump all other moral constraints. But I asked him if that distinction is too subtle by half.
“Yeah, too subtle by half,” he said, “maybe that’s accurate.”
I think the debate about #EffectiveAltruism and #longtermism has become horribly confused. Some of the most vociferous critics are conflating different “train stations.” They don’t seem to realize that weak longtermism ≠ strong longtermism ≠ galaxy-brain longtermism. But...
That's not really the critics' fault. Longtermism runs on a series of ideas that link together like train tracks. And when the tracks are laid down in a direction that leads to Crazy Town, that increases the risk that some travelers will head, well, all the way to Crazy Town.
I think there's a better way to lay down tracks to caring about the future — a way that doesn't run such a high risk of leading us to Crazy Town. We can acknowledge that there are multiple sources of moral value and gather diverse POVs on how to divvy up resources between them.
EA is very Global North & that's not just a problem on the level of racial diversity; it's a problem on the level of ideology. Intellectual insularity is bad for any movement, but it’s egregious for one that purports to represent the interests of all humans now & for all eternity.
Effective altruism is Big Politics — it's dealing with questions about how to distribute all of humanity's resources. This shouldn't be up to a few powerful people to decide. Charting the future of humanity should be much more democratic.
As @CarlaZoeC told me: “I think EA has figured out how to have impact. They are still blind to the fact that whether or not that impact is positive or negative over the long term depends on politics. I don’t think they realize that in fact they are a political movement.”
I wrote this piece because EA/longtermism is doing politics on a global, even galactic scale — tons at stake! — yet the debate around it is still muddy. I tried to make it clearer here so we can critique the real thing, not a strawman. Please read & share! vox.com/future-perfect…
People speculated that Ilya saw AGI...as if OpenAI was hiding some conscious, shackled AI in the basement.
But reporting this out, I thought: This is not a horror story about AI. This is a horror story about humans.
Like in 2001: A Space Odyssey, the issue wasn't HAL lying.
Everyone owes massive gratitude to ex-OpenAI folks who are speaking out. It was refreshing to see @janleike's thread today. And by refusing to sign an NDA, @DKokotajlo67142 gave up an insane amount of money so that he'd be free to criticize the company. That's real integrity.
You’ve probably been hearing lots about Israel — but not about Jews with roots in the Arab and Muslim world. They’re over half of Israel’s Jewish population, yet the American media barely covers them.
So let me tell you a story about my family. 🧵 1/18
My dad’s side is from Iraq, where Jews lived for 2,000 years and were deeply integrated into Arab society. Jews spoke Arabic and made up 1/3 of Baghdad’s population. We were everywhere — in parliament, the judicial system, the music scene. Here’s what my family looked like. 2/18
My mom’s side is from Morocco, where Jews cultivated deep friendships with Muslim neighbors — so deep that, when I visited Morocco and found a 90-year-old man who’d known my family 70 years ago, he got so excited that he shouted my grandfather’s name over and over with glee. 3/18
@SBF_FTX 1) EA skews heavily utilitarian. It teaches people to maximize the overall good. That’s a dangerous ethos unless you’re a god who somehow always knows what the good looks like. Per Holden Karnofsky:
2) EA leaders can point to places where they’ve said “the ends don’t justify the means” or “respect commonsense morality.” But the dominant message of EA is “here’s a way to think that’s BETTER and SMARTER than commonsense morality.” That’s kinda the whole point of EA.
Thing is, there’s no 1 definition of fairness. Fairness can have many different meanings — at least 21 by @random_walker’s count! — and those meanings are sometimes incompatible with each other. 2/8
Let’s say your job is to give out loans. Procedural fairness says your lending algorithm is fair if the procedure it uses to make decisions is fair, e.g. anyone with FICO >600 gets a loan. But some racial groups are less likely to have FICO >600 due to historical inequities. 3/8
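Here's a minimal sketch of that tension (in Python, with invented group names and score distributions assumed purely for illustration): the same FICO cutoff applied identically to everyone can still approve groups at very different rates.

```python
# Minimal sketch: a "procedurally fair" rule (same FICO > 600 cutoff for everyone)
# can still yield very different approval rates if groups' score distributions differ.
# Group labels and score distributions are invented for illustration.
import random

random.seed(0)

def approve(fico_score):
    # The procedure is identical for every applicant.
    return fico_score > 600

# Assume historical inequities shifted one group's score distribution downward.
group_a = [random.gauss(680, 60) for _ in range(10_000)]
group_b = [random.gauss(620, 60) for _ in range(10_000)]

rate_a = sum(approve(s) for s in group_a) / len(group_a)
rate_b = sum(approve(s) for s in group_b) / len(group_b)

print(f"Approval rate, group A: {rate_a:.0%}")  # roughly 90%
print(f"Approval rate, group B: {rate_b:.0%}")  # roughly 63%
# Same procedure, unequal outcomes: procedural fairness and outcome fairness diverge.
```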