Richard Ngo
Jun 18 · 7 tweets
Hypothesis: we’ll look back on mass migration as being worse for Europe than WW2 was.

Europe recovered quickly from WW2, because each country remained high-trust and homogeneous.

But you can’t just rebuild your way out of internal ethno-religious fractures.
Why compare mass migration with WW2 specifically? Because the “never again, at any cost” attitude towards WW2 from European elites has been a major cultural force pushing against national identity and for suicidal immigration policies.

For more on that see:
Large-scale remigration will help, but it’ll be hard to do without many other undesirable effects (e.g. strengthening coercive power of the state, massive internal unrest, political polarization).

So a lot of the damage is already locked in.
I posted this poll with this hypothesis in mind.

Ofc people disagree on what percent of immigrants have values “deeply hostile” to European values.

But many respondents believe that even 10% in that category is worse than literal decimation.
Also relevant: this post of mine on a concept I call well-foundedness - roughly, "how far down does an agent's internal coherence go?"

It’s much more abstract but inspired by many of the same ideas as this thread.
I should clarify: *Western* Europe recovered quickly from WW2. Eastern Europe didn’t, due to Soviet occupation. So there are big path-dependencies.

But Eastern Europe did recover quickly after the fall of the USSR, because even the Soviets didn’t destroy their national cohesion.
Some people are incredulous about this hypothesis because of the sheer number who died in WW2.

But the benefits of being a functional society compound over decades, across billions of people.

E.g. China is far better off than India today, even after enduring the horrors of the Great Leap Forward.

More from @RichardMCNgo

Jun 3
“Costly signaling” is one of the most important concepts but has one of the worst names.

The best signals are expensive for others - but conditional on that, the cheaper they are for you the better!

We should rename them “costly-to-fake signals”.
Consider an antelope stotting while being chased by lions. This is extremely costly for unhealthy antelopes, because it makes them much more likely to be eaten. But the fastest antelopes might be so confident the lions will never catch them that it’s approximately free for them.
Or consider dating. If you have few options, playing hard to get is very costly: if your date loses interest you’ll be alone.

But if you have many romantic prospects it’s not a big deal if one loses interest.

So playing hard to get is a costly-to-fake (but not costly) signal!
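To make the distinction concrete, here's a toy numerical sketch of the signaling logic (a minimal sketch in Python; all payoff numbers are my assumptions for illustration, not from the thread):

```python
# Toy model of a costly-to-fake signal (illustrative numbers only).
# A signal is credible when sending it pays off for "strong" types
# but not for "weak" ones - i.e. when it separates the two.

def sends_signal(benefit: float, cost: float) -> bool:
    """An agent signals iff the benefit of being believed exceeds the cost."""
    return benefit > cost

BENEFIT = 10.0  # value of being perceived as strong (same for both types)

# Stotting: nearly free for a fast antelope, potentially fatal for a slow one.
cost_for_fast, cost_for_slow = 1.0, 50.0

fast_signals = sends_signal(BENEFIT, cost_for_fast)  # True
slow_signals = sends_signal(BENEFIT, cost_for_slow)  # False

# The signal is informative (separating) exactly when types behave differently.
print("separating equilibrium:", fast_signals and not slow_signals)
```

The point of the sketch: what makes the signal work is the cost *gap* between types, not the cost paid by the honest signaler - which can be arbitrarily small.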
May 5
I became a virtue ethicist after observing the failures of consequentialism and deontology in the real world.

But I’ve seldom read academic philosophers analyzing such examples when arguing about which ethical theory to endorse.

What are the best examples of that?
Philosophers like Singer have made arguments that, *given* a certain ethical view, real-world evidence should motivate certain actions.

But that’s different from saying that the real-world evidence should motivate the ethical view in the first place.
There are a bunch of arguments floating around that roughly say “utilitarianism led to eugenics, therefore it’s bad”. I’ve been pretty unimpressed by such arguments when I’ve read them in the past, but I’d appreciate pointers to unusually good ones.
Apr 20
Modernity is a war of high and low against the middle, not just for classes but also for levels of societal structure.

The power of middle-sized groups (like families, communities and states) is flowing both down to individuals, and up to international organizations and ideologies.
Power flowing out from the middle is generally negative-sum though, because high and low are too different to collaborate productively.

So you get big governments and global ideologies ruling over increasingly dysfunctional and atomized societies.
I should clarify that, in the case of class, what I’m calling the middle is not the traditional “middle class”, but basically anyone with a stable job.

Conversely, the “low” are a constantly growing underclass: long-term welfare recipients, criminals, illegal immigrants, etc.
Apr 17
The AI safety community is very good at identifying levers of power over AI - e.g. evals for the most concerning capabilities.

Unfortunately this consistently leads people to grab those levers “as soon as possible”.

Usually it’s not literally the same people, but in this case it is.
To be clear, I don’t think it’s a viable strategy to stay fully hands-off the coming AI revolution, any more than it would have been for the Industrial Revolution.

But it’s particularly jarring to see the *evals* people leverage their work on public goods to go accelerationist.
This is why I’m a virtue ethicist now. No rules are flexible enough to guide us through this. And “do the most valuable thing” is very near in strategy space to “do the most disvaluable thing”.

So focus on key levers only in proportion to how well-grounded your motivations are.
Mar 26
We're heading towards a world where, in terms of skills and power, AIs are as far above humans as humans are above animals.

Obviously this has gone very badly for animals. So in a recent talk I ask: what political philosophy could help such a future go well?
The history of politics is a tug-of-war between the rule of "innately superior" aristocrats and blank-slate egalitarianism.

But these are both essentialist philosophies which deny empirical truths.

Instead, the duty of skilled/powerful elites should be to empower everyone else.
Empowerment contrasts with welfarism, the view that elites should *look after* everyone else. Welfarism is fragile, since elites can use it as a pretext for consolidating power (as e.g. colonizers did).

We'd like AIs to empower humans without ever consolidating power themselves.
Jan 25
This essay is much more misleading than insightful, for (at least) two reasons:

1. The concept of AGI fully substituting for human labor is incoherent, because humans have inherent advantages at some jobs simply because they're human. These can arise via consumer preferences (e.g. therapists), political considerations (e.g. lobbyists) or regulations (e.g. judges). As AI automates everything else, the Baumol effect predicts that such jobs become a large proportion of the economy (see the toy sketch after point 2).

It's fine to set up a naive econ model which ignores these, but it's irresponsible to give many pages of arguments about the implications of that naive model for the economy while relegating these crucial factors I mentioned above to one sentence in the conclusion. The way the essay is framed makes it hard for people who don't already know why it's wrong to realize how fragile the arguments are.

2. The essay claims that "as we continue innovating, we will eventually enter [a] second regime... in which we approach the physical limits of technological progress". This is true. It is also *ridiculously* distant. Forget Dyson spheres, this regime is one where we've figured out how to move stars around and colonize whole new galaxies and break at least a few things we currently consider fundamental laws of science.

Trying to appeal to this regime to draw *any* conclusions about human wages is absurd. None of the main concepts in this essay are robust enough that we can meaningfully extrapolate them that far. The essay is talking about "human wages" in a setting so futuristic that what even counts as "human" will likely be unrecognizable (due to genetic engineering/uploading/merging with AIs/etc).
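To make the Baumol point in (1) concrete, here's a minimal two-sector sketch in Python. All numbers and the fixed-relative-demand assumption are mine, purely for illustration; this isn't a model from the essay:

```python
# Toy Baumol dynamic (illustrative numbers only).
# Sector A ("automatable") gets steadily cheaper as AI improves;
# sector H ("human-preferred": therapists, lobbyists, judges) doesn't.
# With relative demand held fixed, H's share of total spending grows.

automatable_price = 1.0
human_price = 1.0
quantity_each = 100.0  # demand for each sector, held fixed for simplicity

for year in range(0, 51, 10):
    spend_a = automatable_price * quantity_each
    spend_h = human_price * quantity_each
    h_share = spend_h / (spend_a + spend_h)
    print(f"year {year:2d}: human-sector share of spending = {h_share:.0%}")
    automatable_price *= 0.5  # automation halves sector A's price each decade

# Output: the human-preferred sector's share climbs from 50% towards ~97%,
# even though nothing about that sector itself changed.
```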

The overall lesson: when you're reasoning about world-historic changes, you can't just take standard concepts that we use today, do some basic modeling, and run with that. All the hard work is in figuring out how our current concepts break when extrapolated into this new regime, and what to replace them with.
I'm criticising this more directly than I usually would because I recently called out someone else's similarly-ungrounded forecasts about the economy, and as part of that thread made these very points to Matthew.

Linking the final tweet in that thread:
I'm starting to think that people publicly assigning credences to their claims is actually a significant part of the problem.

It's helpful for simple claims. For complex/ambiguous claims it ends up as just another type of vibe manipulation.

Related: mindthefuture.info/p/why-im-not-a…