I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world, by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says "Building an AGI with perverse emotions that lead it to immoral actions would be a crime."
I wouldn't usually put it in those words, but THAT is what the alignment problem is about:
We don't yet know how to reliably build AGI systems _without_ "perverse emotions." It seems like that might be pretty hard to avoid.)
But maybe I'm misunderstanding these arguments.
I would love to dig into this with someone who thinks that AI is not a serious existential risk for reasons related to the above, and together try to answer the question of how the development of AI is most likely to go.
My win conditions:
1. I change my mind about AI risk, in some way
2. I understand some new-to-me argument that I need to think about in depth
3. I viscerally "get" what I'm missing from the CritRat frame
4. There's a public refutation of the arguments that turn out to be flawed
The most unrealistic thing about an iron man suit?
The fingers!
There's not that much space between your digits. It would be uncomfortable and impractical to put layers of metal in those gaps. And if you did, they would be too thin to provide much protection.
And the fingers also have to bend, which means you have even less space for material, and even less protection.
It would make much more sense if the gloves of the iron man suit were like mittens, with all the fingers in one chunk. Then you can put strong layers of metal around all the fingers at once.
I had a dream in which I considered tweeting to ask Dick Grayson why he became a police officer, when he was already Nightwing (which is kind of a substitute for a police officer).
But then I realized that I couldn't do that because it would reveal his secret identity.
Only later did I realize that I couldn't do that because Dick Grayson is fictional.

But nevertheless, I am still left with the original question. Wouldn't it be better to put your resources into one crime-fighting profession or the other?
@Meaningness @ESRogs @ESYudkowsky @AnnaWSalamon @juliagalef The basic reason was that I was frustrated with philosophy (judging from the philosophy I had seen so far), and I saw this guy apparently making progress on philosophy and not getting bogged down in the basics.
In theory "rationalists should win". And I sure as heck think that you can use thinking to figure out how to win, in many domains.
But that doesn't mean that anything that is properly called rationality is the first order factor for success.
It turns out that, in most domains, individual success depends on things that are generally orthogonal to rationality, like working really hard, and being generally emotionally well-adjusted.
Reading about the history of slavery, one new realization is how terrified the South was of a slave uprising.
The masters lived in fear that one day the slaves would rise up and either 1) murder the whites in their beds or 2) turn the tables and enslave the whites.
That was a persistent background fear.
Most sources talk about the motivation for the Civil War (and other stuff) as "protecting the Southern way of life", and that does seem like a real thing. But I think "the Southern way of life" was wrapped up with a visceral fear for their lives.
This seems to me like a deeply important thread, which I think I should work to wrap my head around. If the basic claim is true, I think it should impact my worldview about as much as, say, learning about game theory.