By the way, everyone-who's-disagreeing-with-me-about-AI-risk-on-twitter,
This video is a great introduction to the problem as I, and others I know, think of it. So if you want to make counterarguments, it might be helpful to respond to it.
You might dispute some part of this framing, but it would be good to understand why I'm / we're using it in the first place.
(For instance, it isn't an arbitrary choice to represent goals as a utility function. It solves a specific problem of formalization.)
And if you want to go further than that, @robertskmiles makes excellent explainer videos on more specific AI Risk problems.
His YouTube channel is my go-to recommendation for people who are trying to get up to speed on the shape of the problem.
In the past few months I've shifted my implicit thinking about meditation and enlightenment.
I've gone from thinking:
"Enlightenment is probably a real thing, and probably related to processing epistemic technical debt somehow.
Probably it also has something to do with noticing the 'edges' of how you're projecting your reality, and getting a visceral sense of the difference between 'the movie' and 'the screen the movie is projected on.'
In particular, enlightenment (probably) is or is the result of progressing far enough down a particular psychological axis, in the "good direction".
Including many moves that I actively teach. Embarrassing!
In particular, given the number of people responding to me, I've fallen into a pattern of giving counterarguments to specific, false (in my view) claims, without checking / showing that I've understood those claims.
So (aided by @VictorLevoso's example in a private correspondence), I'm going to offer a paraphrase of my current understanding of the Crit Rat view on AI risk, in a central place where everyone can respond at once.
This quoted text seems really important. How societies and individual institutions adapt to the pandemic is probably the thing that dominates the "sign" of the pandemic's overall impact.
I agree that COVID does seem to be right in our Goldilocks zone: not civilization-hobbling in the long term, but bad enough to cause us collectively to take notice and (ideally) to face up to and correct the flaws in our systems.
It's extreme enough that we have to try possibly radical ideas that wouldn't usually see the light of day in order to succeed.
But it looks like that barely happened at all. It seemed like there was very little innovation.
Similarly, if you think I'm foundationally confused, or my frame here is not even wrong, I'd also love to hear that.
I'm aware that there are mathematical Crit Rat critiques that claim to undermine Bayes. I'll also want those eventually, but I'm considering that a separate thread that I'll take in sequence.
So feel free to send me links to that sort of thing, but I won't engage with them, yet.
The most unrealistic thing about an iron man suit?
The fingers!
There's not that much space between your digits. It would be uncomfortable and impractical to put layers of metal in those gaps. And if you did, they would be too thin to provide much protection.
And the fingers also have to bend, which means you have even less space for material, and even less protection.
It would make much more sense if the gloves of the iron man suit were like mittens, with all the fingers in one chunk. Then you can put strong layers of metal around all the fingers at once.
I had a dream in which I considered tweeting to ask Dick Grayson why he became a police officer, when he was already Nightwing (which is kind of a substitute for a police officer).
But then I realized that I couldn't do that because it would reveal his secret identity.
Only later did I realize that I couldn't do that because Dick Grayson is fictional.
But nevertheless, I am still left with the original question. Wouldn't it be better to put your resources into one crime-fighting profession or the other?