Venkatesh Rao ☀️
Nov 9, 2022 · 20 tweets
The larger the pile of money the dumber it is

If you have only $10 it is probably really smart money, because you're going to think hard, at the object level, about spending it

If you have $10B, it’s being deployed mostly in > $250m chunks via org charts with 7 levels of bs theories
The largest object level thing you might ever buy even as a billionaire is probably like a car. Anything bigger, you’re actually buying a theory of ownership with multiple levels of abstraction each with assumptions.
E.g., buying a refurbished aircraft carrier — probably the biggest "existing thing" that is ever bought — means buying training, maintenance, technology transfer, etc. Above that, retrofit/upgrade roadmaps, aircraft options, fuel futures… it looks like a "thing" but is not.
With huge state purchases like *new* aircraft carriers, bridges, space tech, etc., you're buying a multi-level theory of development.

This is actually the problem with EA at scale. The philosophical principle is sound at $1000, sketchy at $1m, very sketchy at $100m, and religion at $1b.
You cannot deploy >$250m on *anything* without a large bureaucratic org thinking through the details at multiple levels of abstraction, each vulnerable to capture by a bullshit theory.

Even if the object level is uncontroversial, like "feed children"
I know nothing about SBF personally… what mix of stupid/unlucky/unethical/moral-hazard etc. was involved in the meltdown (Matt Levine's explanation of using your own token as collateral seems to be all 4). But the EA thing is the novel element here.
Dunno if he was a sincere believer in the philosophy or it was some sort of influence-theater larp for him, but trying to do good at the scale of billions within 1 lifetime has all the same dumb-big-money problems as deploying it for any other reason.
You need multiple models of reality as you scale. I’d say if N=$ amount you want to deploy, you should use log_10(N) theories to think about it. So $1m = 6 mental models. $1B = 9. Regardless of purpose.
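The log rule can be sketched in a couple of lines (a hypothetical illustration of the heuristic; the function name is mine, not from the thread):

```python
import math

def mental_models_needed(dollars: float) -> int:
    """Heuristic from the thread: deploying $N sanely takes
    roughly log10(N) distinct theories/mental models of reality."""
    return max(1, round(math.log10(dollars)))

# $1m -> 6 models, $1B -> 9 models, $10 -> 1 model
```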

EA is just *one* mental model of philanthropy. So good by itself up to $10.
I’ve supported people in making big money decisions but have not myself ever bought anything bigger than a car. That’s borderline between object level and theorized. We’ve been shopping for a house for the first time and it feels clearly like “buying a full-stack theory of life”
I’ve seen singularitarians express an astonishing sort of worry: that “obviously” the highest-leverage kind of future-utility-maxxing EA-giving is to AI risk. That seems a little too easy (afaict this is why this crowd loves EA like PB&J).

Really? Ya think?
Fun math problem of the sort they’re actually geniuses at but never seem to do. If your theory of “spend $X on Y” rests on 7 layers of abstraction, and you’re 90% sure your thinking at each level is sound, what are the chances you’ll reach the right conclusion?

0.9^7 ≈ 0.48.
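The compounding is worth seeing worked out (a hypothetical sketch; the function is my own illustration of the layered-confidence arithmetic):

```python
def chance_stack_is_sound(p_per_layer: float, layers: int) -> float:
    """If each abstraction layer is independently sound with
    probability p, the whole stack is sound with p ** layers."""
    return p_per_layer ** layers

# 7 layers at 90% confidence each:
# 0.9 ** 7 ≈ 0.478 -- worse than a coin flip
```

Independence is the generous assumption here; correlated errors across layers can make the real number worse.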
This sort of thing has long been my main critique of wealth inequality. It’s not really a critique of EA in particular, but *any* single theory that an org proportionate in size to log(wealth) must embody to deploy wealth.
Large wealth concentrations produce stupidity at scale, *whatever* the theory and purpose of deployment. The most “effective” thing you can do is fragment it to the point it’s not quite as dumb. Unless the thing itself requires concentration, like a space program.
When people say they want “market-based” solutions to problems instead of “massive” state programs, the underlying intuition is not really about markets; it’s about the maximum scale of deployment an individual or closed org gets to do without orders from “above.”
A “market-based” solution that leads to a huge corporation spending a $1b government order via internal hierarchical decision-making is actually worse than a $1b government program deployed as 40 $250k grants to smaller agencies. The latter is actually more market-like.
Of course this is not always possible. Not all problems can be partitioned this way. If you want to allocate $1b to a space program, giving 40 cities $250m to start 40 space programs is dumb. The problem requires concentration. But within physics constraints, unbundle the spend.
Heh, sorry, but this ironically illustrates the point about errors creeping in with abstraction: $1b/$250k is 4000, not 40. Plus I typoed it elsewhere as $250m (which would be 4).
people have made this sort of error while actually spending real money, not just shitposting…
I promise if someone gives me $1b to deploy, I’ll use an Excel spreadsheet to do the arithmetic properly and hire an intern to cross-check it for decimal-point and order-of-magnitude errors
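That cross-check is trivial to script (a hypothetical sketch; the function name is mine), which is rather the point:

```python
def grants_from_budget(budget: float, grant_size: float) -> int:
    """Order-of-magnitude sanity check: how many grants of a
    given size fit in a budget?"""
    assert grant_size > 0, "grant size must be positive"
    return int(budget // grant_size)

# The corrected arithmetic from the thread:
# $1b / $250k = 4000 grants (not 40); $1b / $250m = 4
```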
