Hypothesis: we’ll look back on mass migration as being worse for Europe than WW2 was.
Europe recovered quickly from WW2, because each country remained high-trust and homogeneous.
But you can’t just rebuild your way out of internal ethno-religious fractures.
Why compare mass migration with WW2 specifically? Because European elites’ “never again, at any cost” attitude towards WW2 has been a major cultural force pushing against national identity and for suicidal immigration policies.
Large-scale remigration will help, but it’ll be hard to do without many other undesirable effects (e.g. strengthening the coercive power of the state, massive internal unrest, political polarization).
So a lot of the damage is already locked in.
I posted this poll with this hypothesis in mind.
Of course people disagree on what percentage of immigrants have values “deeply hostile” to European values.
But many respondents believe that even 10% in that category is worse than literal decimation.
“Costly signaling” is one of the most important concepts but has one of the worst names.
The best signals are expensive for others to send; but conditional on that, the cheaper they are for you, the better!
We should rename them “costly-to-fake signals”.
Consider an antelope stotting while being chased by a lion. This is extremely costly for unhealthy antelopes, because it makes them much more likely to be eaten. But the fastest antelopes might be so confident the lion will never catch them that it’s approximately free for them.
Or consider dating. If you have few options, playing hard to get is very costly: if your date loses interest you’ll be alone.
But if you have many romantic prospects it’s not a big deal if one loses interest.
So playing hard to get is a costly-to-fake (but not costly) signal!
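To spell out the logic, here’s a toy sketch (my own illustration with made-up payoff numbers, not anything from the examples above): the signal’s cost depends on the sender’s type, so only genuine senders find it worth sending, even though it costs them almost nothing.

```python
# Toy costly-to-fake signaling model (illustrative numbers only).
# Being believed is worth 10. Sending the signal costs 1 for a genuinely
# high-quality sender, but 15 for a low-quality one (the slow antelope
# that stots is far more likely to be caught).

def payoff(quality: str, send_signal: bool) -> float:
    benefit = 10 if send_signal else 0            # receivers reward the signal
    cost = 0
    if send_signal:
        cost = 1 if quality == "high" else 15     # cheap for high types, ruinous for low
    return benefit - cost

for quality in ("high", "low"):
    signals = payoff(quality, True) > payoff(quality, False)
    print(quality, "-> signals" if signals else "-> stays quiet")

# high -> signals (10 - 1 = 9 beats 0); low -> stays quiet (10 - 15 = -5 loses to 0).
# The signal separates the types even though it is nearly free for the honest sender.
```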
I became a virtue ethicist after observing the failures of consequentialism and deontology in the real world.
But I’ve seldom read academic philosophers analyzing such examples when arguing about which ethical theory to endorse.
What are the best examples of that?
Philosophers like Singer have made arguments that, *given* a certain ethical view, real-world evidence should motivate certain actions.
But that’s different from saying that the real-world evidence should motivate the ethical view in the first place.
There are a bunch of arguments floating around that roughly say “utilitarianism led to eugenics, therefore it’s bad”. I’ve been pretty unimpressed by such arguments when I’ve read them in the past, but I’d appreciate pointers to unusually good ones.
Modernity is a war of high and low against middle, not just between social classes but also between levels of societal structure.
The power of middle-sized groups (like families, communities and states) is flowing both down to individuals, and up to international organizations and ideologies.
Power flowing out from the middle is generally negative-sum though, because high and low are too different to collaborate productively.
So you get big governments and global ideologies ruling over increasingly dysfunctional and atomized societies.
I should clarify that, in the case of class, what I’m calling the middle is not the traditional “middle class”, but basically anyone with a stable job.
Conversely, the “low” are a constantly growing underclass: long-term welfare recipients, criminals, illegal immigrants, etc.
To be clear, I don’t think it’s a viable strategy to stay fully hands-off the coming AI revolution, any more than it would have been for the Industrial Revolution.
But it’s particularly jarring to see the *evals* people leverage their work on public goods to go accelerationist.
This is why I’m a virtue ethicist now. No rules are flexible enough to guide us through this. And “do the most valuable thing” is very near in strategy space to “do the most disvaluable thing”.
So focus on key levers only in proportion to how well-grounded your motivations are.
We're heading towards a world where, in terms of skills and power, AIs are as far above humans as humans are above animals.
Obviously this has gone very badly for animals. So in a recent talk I asked: what political philosophy could help such a future go well?
The history of politics is a tug-of-war between the rule of "innately superior" aristocrats and blank-slate egalitarianism.
But these are both essentialist philosophies which deny empirical truths.
Instead, the duty of skilled/powerful elites should be to empower everyone else.
Empowerment contrasts with welfarism, the view that elites should *look after* everyone else. Welfarism is fragile, since elites can use it as a pretext for consolidating power (as e.g. colonizers did).
We'd like AIs to empower humans without ever consolidating power themselves.
This essay is much more misleading than insightful, for (at least) two reasons:
1. The concept of AGI fully substituting for human labor is incoherent, because humans have inherent advantages at some jobs simply because they're human. This can arise via consumer preferences (e.g. therapists), political considerations (e.g. lobbyists), or regulations (e.g. judges). As AI automates everything else, the Baumol effect predicts that such jobs will become a large proportion of the economy.
It's fine to set up a naive econ model which ignores these factors, but it's irresponsible to spend many pages arguing about the implications of that naive model for the economy while relegating them to one sentence in the conclusion. The way the essay is framed makes it hard for readers who don't already know why it's wrong to realize how fragile the arguments are.
2. The essay claims that "as we continue innovating, we will eventually enter [a] second regime... in which we approach the physical limits of technological progress". This is true. It is also *ridiculously* distant. Forget Dyson spheres, this regime is one where we've figured out how to move stars around and colonize whole new galaxies and break at least a few things we currently consider fundamental laws of science.
Trying to appeal to this regime to draw *any* conclusions about human wages is absurd. None of the main concepts in this essay are robust enough that we can meaningfully extrapolate them that far. The essay is talking about "human wages" in a setting so futuristic that what even counts as "human" will likely be unrecognizable (due to genetic engineering/uploading/merging with AIs/etc).
The overall lesson: when you're reasoning about world-historic changes, you can't just take standard concepts that we use today, do some basic modeling, and run with that. All the hard work is in figuring out how our current concepts break when extrapolated into this new regime, and what to replace them with.
I'm criticising this more directly than I usually would because I recently called out someone else's similarly ungrounded forecasts about the economy, and made these very points to Matthew in that thread.