In theory "rationalists should win". And I sure as heck think that you can use thinking to figure out how to win, in many domains.
But that doesn't mean that anything properly called rationality is the first-order factor for success.
It turns out that, in most domains, individual success depends on things that are mostly orthogonal to rationality, like working really hard and being generally emotionally well-adjusted.
Sometimes people almost-notice this, and are tempted to redefine "rationality" (which is "the art of systematized winning" after all) to include whatever helps you win.
If it turns out to be working hard, then that's what rationality is!
I think that this kind of definition creep is a mistake.
Not all virtues are rationality.
It seems that "having true beliefs" is, in principle, orthogonal to "the things that tend to help you win", and, empirically, somewhat anti-correlated with them.
(Rationality requires a lot of disagreeableness, for instance, which is often not adaptive.)
And I think it is pretty important for there to be people who are committed to believing true things. So I would rather that those people not be committed to believing that "believing true things helps you win".
I might write a longer post about this sometime.
• • •
@Meaningness @ESRogs @ESYudkowsky @AnnaWSalamon @juliagalef The basic reason was that I was frustrated with philosophy (based on the philosophy I had seen up to that point), and I saw this guy apparently making progress on philosophy without getting bogged down in the basics.
I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
Reading about the history of slavery, one thing I newly realized is how terrified the South was of a slave uprising.
The masters lived in fear that one day the slaves would rise up and either 1) murder the whites in their beds or 2) turn the tables and enslave the whites.
That was a persistent background fear.
Most sources describe the motivation for the Civil War (and other stuff) as "protecting the Southern way of life", and that does seem like a real thing. But I think "the Southern way of life" was wrapped up with a visceral fear for their lives.
This seems to me like a deeply important thread, which I think I should work to wrap my head around. If the basic conceit is true, I think it should impact my worldview about as much as, say, learning about game theory.
@Insect_Song But...the label "bad actor." I think that label is useful, and I don't particularly dispute its use here, but that doesn't mean I don't think it is useful to empathize with the internal state of bad actors (unless you're doing that as insulation from manipulation).
@Insect_Song "Bad actor" to me, is like a boundary that a person is setting, but it doesn't preclude understanding the fuck up that results from conflicting first-person perspectives that are each laying claim to some burden of proof thing.
@Insect_Song Like, the thing that is happening there seems like an unusually crisp example of a thing that is happening all the time, between people who are each behaving correctly in their own worlds.
This is an amazing case study in poor communication. Everyone I talk to is much better than this, but the dynamics here are writ-large versions of mistakes that we are probably making.
I'm looking at this thinking "What went wrong here, and what general pattern, or piece of skill, would have been needed to avoid what went wrong?"
First pass: Is the core thing that's happening here about which things should be assumed to be willful misunderstanding and which should be assumed to be honest mistakes? That is, where do you allocate charity?