The larger the pile of money, the dumber it is.
If you have only $10, it is probably really smart money, because you're going to think hard and at the object level about spending it.
If you have $10B, it’s being deployed mostly in > $250m chunks via org charts with 7 levels of bs theories
The largest object-level thing you might ever buy, even as a billionaire, is probably a car. Anything bigger, and you're actually buying a theory of ownership with multiple levels of abstraction, each with assumptions.
For example, buying a refurbished aircraft carrier — probably the biggest "existing thing" that is ever bought — means buying training, maintenance, technology transfer, etc. Above that, retrofit/upgrade roadmaps, aircraft options, fuel futures… it looks like a "thing" but is not.
With huge state purchases like *new* aircraft carriers, bridges, space tech, etc., you're buying a multi-level theory of development.
This is actually the problem with EA at scale. The philosophical principle is sound at $1000, sketchy at $1m, very sketchy at $100m, and religion at $1b.
You cannot deploy >$250m on *anything* without a large bureaucratic org thinking through the details at multiple levels of abstraction, each vulnerable to capture by a bullshit theory.
Even if the object level is uncontroversial, like "feed children."
I know nothing about SBF personally, or what mix of stupid/unlucky/unethical/moral-hazard etc. was involved in the meltdown (Matt Levine's explanation of using your own token as collateral seems to be all four). But the EA thing is the novel element here.
Dunno if he was a sincere believer in the philosophy or if it was some sort of influence-theater LARP for him, but trying to do good at the scale of billions within one lifetime has all the same dumb-big-money problems as deploying it for any other reason.
You need multiple models of reality as you scale. I'd say if N = the $ amount you want to deploy, you should use log_10(N) theories to think about it. So $1m = 6 mental models, $1B = 9. Regardless of purpose.
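Here's that rule of thumb as a toy sketch (Python; the function name and the `ceil` rounding are my own assumptions, since "number of theories" obviously isn't a precise quantity):

```python
import math

def models_needed(dollars: float) -> int:
    """Rough count of distinct mental models needed to deploy `dollars` sanely,
    per the log10 rule of thumb above."""
    return max(1, math.ceil(math.log10(dollars)))

for amount in (10, 1_000, 1_000_000, 1_000_000_000):
    print(f"${amount:,} -> ~{models_needed(amount)} mental models")
# prints 1, 3, 6, and 9 models respectively
```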
EA is just *one* mental model of philanthropy. So it's good by itself up to $10.
I've supported people in making big-money decisions but have never myself bought anything bigger than a car. That's borderline between object-level and theorized. We've been shopping for a house for the first time, and it feels clearly like "buying a full-stack theory of life."
I've seen singularitarians express an astonishing sort of worry: that "obviously" the highest-leverage kind of future-utility-maxxing EA giving is to AI risk. That seems a little too easy (afaict this is why this crowd loves EA like PB&J).
Really? Ya think?
Fun math problem of the sort they're actually geniuses at but never seem to do: if your theory of "spend $X on Y" rests on 7 layers of abstraction, and you're 90% sure your thinking at each layer is sound, what are the chances you'll reach the right conclusion?
0.9^7 ≈ 0.48.
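Or as a two-line sketch (Python; the independence of the layers is the toy assumption doing all the work here):

```python
def chance_of_sound_conclusion(per_layer_confidence: float, layers: int) -> float:
    # Toy model: each layer of abstraction is independently sound with the given probability.
    return per_layer_confidence ** layers

print(chance_of_sound_conclusion(0.9, 7))    # ~0.478 -- basically a coin flip
print(chance_of_sound_conclusion(0.99, 7))   # ~0.932 -- even 99% per layer leaks
```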
This sort of thing has long been my main critique of wealth inequality. It's not really a critique of EA in particular, but of *any* single theory that an org proportionate in size to log(wealth) must embody to deploy the wealth.
Large wealth concentrations produce stupidity at scale, *whatever* the theory and purpose of deployment. The most “effective” thing you can do is fragment it to the point it’s not quite as dumb. Unless the thing itself requires concentration, like a space program.
When people say they want "market-based" solutions to problems instead of "massive" state programs, the underlying intuition is not about markets so much as about the maximum scale of deployment an individual or closed org gets to execute without orders from "above."
A "market-based" solution which leads to a huge corporation spending a $1b government order via internal hierarchical decision-making is actually worse than a $1b government program that's deployed as 4,000 $250k grants to smaller agencies. The latter is actually more market-like.
Of course this is not always possible. Not all problems can be partitioned this way. If you want to allocate $1b to a space program, giving 40 cities $25m each to start 40 space programs is dumb. The problem requires concentration. But within physics constraints, unbundle the spend.
Heh, and ironically illustrating the point about errors creeping in with abstraction: the first version of this thread said 40 grants when $1b/$250k is 4,000, and typoed the grant size elsewhere as $250m (which would give 4).
People have made this sort of error while actually spending real money, not just shitposting…
I promise if someone gives me $1b to deploy, I'll use an Excel spreadsheet to do the arithmetic properly and hire an intern to crosscheck it for decimal-point and order-of-magnitude errors.
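For what it's worth, here's what that crosscheck might look like as a toy script rather than a spreadsheet (Python; the budget and grant-size constants are just the hypothetical numbers from this thread):

```python
import math

TOTAL_BUDGET = 1_000_000_000   # the hypothetical $1b to deploy
GRANT_SIZE = 250_000           # the hypothetical $250k per grant

n_grants = TOTAL_BUDGET // GRANT_SIZE
print(f"{n_grants:,} grants of ${GRANT_SIZE:,}")   # 4,000 grants, not 40

# The "intern": recompute from the other direction and compare orders of magnitude,
# to catch decimal-point and order-of-magnitude slips.
assert n_grants * GRANT_SIZE == TOTAL_BUDGET
assert abs(math.log10(n_grants) - (math.log10(TOTAL_BUDGET) - math.log10(GRANT_SIZE))) < 0.01
```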
