If you have only $10, it’s probably really smart money, because you’re going to think hard, at the object level, about spending it
If you have $10B, it’s being deployed mostly in >$250m chunks via org charts with 7 levels of bs theories
The largest object-level thing you might ever buy, even as a billionaire, is probably a car. Anything bigger, and you’re actually buying a theory of ownership with multiple levels of abstraction, each with its own assumptions.
For example, buying a refurbished aircraft carrier — probably the biggest “existing thing” that is ever bought — means buying training, maintenance, technology transfer, etc. Above that, retrofit/upgrade roadmaps, aircraft options, fuel futures… it looks like a “thing” but is not.
With huge state purchases like *new* aircraft carriers, bridges, space tech, etc., you’re buying a multi-level theory of development.
This is actually the problem with EA at scale. The philosophical principle is sound at $1000, sketchy at $1m, very sketchy at $100m, and religion at $1b.
You cannot deploy >$250m on *anything* without a large bureaucratic org thinking through the details at multiple levels of abstraction, each vulnerable to capture by a bullshit theory.
Even if the object level is uncontroversial, like “feed children”
I know nothing about SBF personally… or what mix of stupid/unlucky/unethical/moral-hazard etc. was involved in the meltdown (Matt Levine’s explanation of using your own token as collateral seems to be all 4). But the EA thing is the novel element here.
Dunno if he was a sincere believer in the philosophy or if it was some sort of influence-theater larp for him, but trying to do good at the scale of billions within 1 lifetime has all the same dumb-big-money problems as deploying it for any other reason.
You need multiple models of reality as you scale. I’d say if N = the $ amount you want to deploy, you should use log_10(N) theories to think about it. So $1m = 6 mental models. $1B = 9. Regardless of purpose.
EA is just *one* mental model of philanthropy. So good by itself up to $10.
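A minimal sketch of that rule of thumb, for anyone who wants to play with it (the function name is mine, purely illustrative):

```python
import math

def mental_models_needed(dollars: float) -> int:
    """Rule of thumb from above: you need about log10(N) distinct
    mental models to sanely deploy N dollars."""
    return max(1, math.ceil(math.log10(dollars)))

for n in (10, 1_000, 1_000_000, 1_000_000_000):
    print(f"${n:,} -> {mental_models_needed(n)} mental model(s)")
# $10 -> 1, $1,000 -> 3, $1,000,000 -> 6, $1,000,000,000 -> 9
```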
I’ve supported people in making big money decisions but have never myself bought anything bigger than a car. That’s borderline between object level and theorized. We’ve been shopping for a house for the first time and it feels clearly like “buying a full-stack theory of life.”
I’ve seen singularitarians express an astonishingly convenient sort of worry: that “obviously” the highest-leverage kind of future-utility-maxxing EA giving is to AI risk. That seems a little too easy (afaict this is why this crowd loves EA like PB&J)
Really? Ya think?
Fun math problem of the sort they’re actually geniuses at but never seem to do: if your theory of “spend $X on Y” rests on 7 layers of abstraction, and you’re 90% sure your thinking at each layer is sound, what are the chances you’ll reach the right conclusion?
0.9^7 ≈ 0.48.
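The same calculation, generalized (a quick sketch; the generous assumption is that the layers fail independently):

```python
def p_right_conclusion(p_per_layer: float, layers: int) -> float:
    """Chance that every layer of the abstraction stack is sound,
    assuming (generously) that layers fail independently."""
    return p_per_layer ** layers

for layers in (1, 3, 7, 9):
    print(f"{layers} layer(s) at 90%: {p_right_conclusion(0.9, layers):.2f}")
# 1 layer(s) at 90%: 0.90
# 3 layer(s) at 90%: 0.73
# 7 layer(s) at 90%: 0.48
# 9 layer(s) at 90%: 0.39
```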
This sort of thing has long been my main critique of wealth inequality. It’s not really a critique of EA in particular, but of *any* single theory that an org proportionate in size to log(wealth) must embody to deploy wealth.
Large wealth concentrations produce stupidity at scale, *whatever* the theory and purpose of deployment. The most “effective” thing you can do is fragment it to the point it’s not quite as dumb. Unless the thing itself requires concentration, like a space program.
When people say they want “market-based” solutions to problems instead of “massive” state programs, the underlying intuition is not about markets so much as it’s about the maximum scale of deployment an individual or closed org gets to execute without orders from “above”
A “market-based” solution in which a huge corporation spends a $1b government order via internal hierarchical decision-making is actually worse than a $1b government program deployed as 40 $250k grants to smaller agencies. The latter is actually more market-like.
Of course this is not always possible. Not all problems can be partitioned this way. If you want to allocate $1b to a space program, giving 40 cities $250m to start 40 space programs is dumb. The problem requires concentration. But within physics constraints, unbundle the spend.
Heh, sorry, but this ironically illustrates the point about errors creeping in with abstraction: 1b/250k is 4000, not 40. Plus I typoed it elsewhere as 250m (which would be 4)
People have made this sort of error while actually spending real money, not just shitposting…
I promise if someone gives me $1b to deploy, I’ll use an Excel spreadsheet to do the arithmetic properly and hire an intern to crosscheck it for decimal-point and order-of-magnitude errors
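In that spirit, a minimal sketch of the crosscheck (the helper is hypothetical; the numbers are the ones from this thread):

```python
def n_grants(total: float, grant_size: float) -> int:
    """How many grants of grant_size fit in total. Refuses to
    round silently, which is where decimal-point errors hide."""
    n, remainder = divmod(total, grant_size)
    assert remainder == 0, "grant size doesn't divide the total evenly"
    return int(n)

print(n_grants(1_000_000_000, 250_000))      # 4000 -- not 40
print(n_grants(1_000_000_000, 250_000_000))  # 4 -- the "250m" typo version
```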
1/20 I am pleased to officially announce the Summer of Protocols (SoP) program, along with a draft of the pilot study that led to it, The Unreasonable Sufficiency of Protocols (TUSoP), which I've been working on with a bunch of collaborators for the last 3 months.
2/ The program will be primarily virtual, and run for 18 weeks from May-August. It will fund a set of full-time Core Researchers and part-time Affiliate Researchers (primarily in the second half) to think broadly and creatively about protocols. summerofprotocols.com
3/ The goal of the program is to catalyze conversation and experimentation around all kinds of protocols, including cultural, social and political ones. We want to get the world thinking in "protocol-first" ways and foster what we call protocol literacy.
Over the last 3 years with the @yak_collective I’ve really come to appreciate the power of committing a small amount of weekly time over a long period. If you have 10 hours to spare for me, I’ll pretty much always pick an hour a week for 10 weeks over 10 hours in 1 day.
Lifestyles tend to be stable for 3-5y at a time. If you commit 1 hour/wk indefinitely, that’s implicitly 150-250 hours if it sticks. Equal to 4-6 weeks of full-time, but that’s harder to use 🤔
An hour is optimal. Can’t do much with 15-30min, but >1h calls for too much org/prep.
It *sounds* powerful to get 4-6 weeks of full-time commitment from a talented person (especially skilled ones who can code or design etc.) but it’s actually useless, because 4-6 weeks is enough to create something complex enough to need maintenance/follow-through that the commitment doesn’t cover.
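The back-of-envelope behind those numbers (a sketch; the 40-hour work week is my assumption):

```python
FULL_TIME_HOURS_PER_WEEK = 40  # assumed definition of "full-time"

for years in (3, 5):  # typical lifestyle stability window
    hours = 52 * years  # 1 hour/week for as long as the lifestyle lasts
    ft_weeks = hours / FULL_TIME_HOURS_PER_WEEK
    print(f"{years} years -> {hours} hours ~= {ft_weeks:.1f} full-time weeks")
# 3 years -> 156 hours ~= 3.9 full-time weeks
# 5 years -> 260 hours ~= 6.5 full-time weeks
```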
I've been noodling on an idea for a while that I've been reluctant to do a thread on for... reasons that will become obvious, but let's yolo it. I call the idea "charismatic epistemologies." Aka... how successful people explain the world, and how those explanations fail.
I've been reluctant to do this thread because it runs the risk of specific successful people I know thinking I'm subtweeting them, which is ironic, because a big feature of charismatic epistemology is believing things are about you when they are not.
My n for this theory is probably several dozen. I've been around people who are far more talented and successful than me for like 30 years now, and have sort of figured out how to free-ride in their slipstreams. Sometimes parasitically, sometimes symbiotically.
Thinking about my thread this morning on why independent research is hard, and what it would take to make it possible, and whether it’s within the reach of private investors who ALL complain endlessly about how they have far too much capital and don’t know where to put it.
On one extreme you can think UBI, which is roughly early-grad-student-level money.
On the other extreme, you could think of early career faculty grants.
An NSF CAREER grant is $100k/year for 5 years; in 2018, about $150 million was disbursed, or about 300 awards.
A subset of ~20 get PECASE awards, which push the $100k up to $500k/yr, so that’s another $40 million. This $190 million basically supports 300 new faculty every year, which I think is approximately ALL new faculty in, say, the top 25-30 universities.
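The arithmetic behind those figures, reconstructed as a sanity check (my reading of the numbers above):

```python
YEARS = 5  # a CAREER grant runs 5 years

career_awards = 300      # ~all new faculty at the top 25-30 schools
career_rate = 100_000    # $/yr per CAREER award
career_total = career_awards * career_rate * YEARS
print(f"CAREER: ${career_total:,}")  # CAREER: $150,000,000

pecase_awards = 20       # subset bumped from $100k to $500k/yr
pecase_bump = (500_000 - 100_000) * YEARS
pecase_total = pecase_awards * pecase_bump
print(f"PECASE bump: ${pecase_total:,}")  # PECASE bump: $40,000,000

print(f"Total: ${career_total + pecase_total:,}")  # Total: $190,000,000
```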
I was briefly calling myself an independent researcher: somebody who self-funds spec R&D on their own ideas. In theory it’s something like indie-research : academic research :: blogging/self-publishing : traditional publishing.
But the idea doesn’t really work.
Unlike the market for general interest writing, the market for R&D is almost entirely institutional, and they don’t really “buy” indie R&D. It’s 99.9% crackpot inventors, 0.1% black swan stuff.
Most of the “research” indie consultants sell to institutions is in the market-research class, not academic research.
99% of the questions people ask in their 20s and early 30s are roughly the same seemingly “important” ones everybody has always asked at those ages. And 99% come up with roughly the same answers ranging from pretty dumb to reasonably smart regardless of effort.
The 1% different answers people come up with might make them somewhat more famous/rich, but are rarely different enough to change much beyond their own lives. The age-old questions are age old because the answers are in our collective diminishing marginal returns zone.
They are important, like air or water, but they aren’t wellsprings of meaning. How to make money, how to get laid, how politics works, who is good/bad, how to choose friends. You’ll spend 99% of your time on this stuff getting to useful and necessary but uninteresting places.