Similarly, if you think I'm foundationally confused, or my frame here is not even wrong, I'd also love to hear that.
I'm aware that there are mathematical Crit Rat critiques that claim to undermine Bayes. I'll also want those eventually, but I'm treating that as a separate thread that I'll take in sequence.
So feel free to send me links to that sort of thing, but I won't engage with them, yet.
The most unrealistic thing about an iron man suit?
The fingers!
There's not that much space between your digits. It would be uncomfortable and impractical to put layers of metal in those gaps. And if you did, they would be too thin to provide much protection.
And the fingers also have to bend, which means you have even less space for material, and even less protection.
It would make much more sense if the gloves of the iron man suit were like mittens, with all the fingers in one chunk. Then you can put strong layers of metal around all the fingers at once.
I had a dream in which I considered tweeting to ask Dick Grayson why he became a police officer, when he was already Nightwing (which is kind of a substitute for a police officer).
But then I realized that I couldn't do that because it would reveal his secret identity.
Only later did I realize that I couldn't do that because Dick Grayson is fictional.
But nevertheless, I am still left with the original question. Wouldn't it be better to put your resources into one crime-fighting profession or the other?
@Meaningness @ESRogs @ESYudkowsky @AnnaWSalamon @juliagalef The basic reason was that I was frustrated with philosophy (or at least with the philosophy I had seen so far), and I saw this guy apparently making progress on philosophy and not getting bogged down in the basics.
I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
In theory "rationalists should win". And I sure as heck think that you can use thinking to figure out how to win, in many domains.
But that doesn't mean that anything that is properly called rationality is the first order factor for success.
It turns out that, in most domains, individual success depends on things that are generally orthogonal to rationality, like working really hard and being generally emotionally well-adjusted.
Reading about the history of slavery, one realization that was new to me is how terrified the South was of a slave uprising.
The masters lived in fear that one day the slaves would rise up and either 1) murder the whites in their beds or 2) turn the tables and enslave the whites.
That was a persistent background fear.
Most sources talk about the motivation for the Civil War (and other things) as "protecting the Southern way of life", and that does seem like a real thing. But I think "the Southern way of life" was wrapped up with a visceral fear for their lives.