In the past few months I've shifted my implicit thinking about meditation and enlightenment.
I've gone from thinking:
"Enlightenment is probably a real thing, and probably related to processing epistemic technical debt somehow.
Probably it also has something to do with noticing the 'edges' of how you're projecting your reality, and getting a visceral sense of the difference between 'the movie' and 'the screen the movie is projected on.'
In particular, enlightenment (probably) is or is the result of progressing far enough down a particular psychological axis, in the 'good direction'.
Something like your mind builds up all kinds of cruft, in the business of living, and OBVIOUSLY you're better off, ceteris paribus, if you clear it out, so you don't have that cruft, and are less constrained.
(Or maybe 'cruft' doesn't name the axis, and it is better to think in terms of 'building concentration power', or something, but it is still an axis being progressed along, if non-monotonically.)
Clearing that cruft (or fully clearing it) is desirable, to the point of being one of the things to check off on the list of 'necessary to be a fully realized / fully adult human', but this one seems particularly costly to attain (tens of thousands of hours), so...
...I'm not really going to do much about it now, except maybe meditate on the margins as is practically useful for other stuff."
to
"It seems like there are actually a handful of different more-or-less robust mind-states that are called 'enlightenment', and none of them are _obviously_ better than the default.
Instead of thinking of this domain as a psychological axis that a person can traverse by training, and of which one end, all else being equal, is 'good', it makes more sense to think of this as a wide state space, in which one can inhabit many different points.
Furthermore, it doesn't seem that the people who have attained any particular kind of enlightenment are obviously practically better for it.
It seems like it is compatible with 'bad [read abusive] behaviors.'
Several of the people who meditate a lot (@SamHarrisOrg), or who claim something-like-enlightenment (@Aella_Girl), do seem to be taking WAY more responsibility for their reactions to things than most people do, which is super commendable and worthwhile, and if the...
...meditation / enlightenment helps with that (as I suspect that it does), that is a good argument in its favor.
But it isn't like they don't get triggered at all, or that they are an order of magnitude less likely to get triggered than I am.
And if the hard-core people still have triggers and anxieties, it seems like triggers are not basically traumas that are cleared away by meditation.
It puts meditation in the category of "helpful boost", not "radical game-changer, required for full maturity as a human."
Furthermore, some people seem to...'melt into a puddle' from doing a lot of meditation / drugs. They become less of a person, or seem to become unmoored, or at least become actively worse at a number of competencies that seem relevant for doing things in the world.
So not only does 'enlightenment' seem more like a space to explore instead of an axis to progress along, but it also isn't as clear that there is an obvious 'good' direction.
It feels less like there's a lever that will cause 'improvement' and more like there's a lever that will cause 'change', and there's less of a simple, principled way to evaluate whether the change is good or bad.
I don't obviously want to become enlightened."
As near as I can track, this shift has mainly come from...
1) @Aella_Girl's enlightenment interviews, and her own experiences (as discussed in Spencer Greenberg's interview with her)
(Probably there's also some background variable about how I'm thinking about the world and the frequency of clear gradients of progression, that also changed, and is upstream of this change?)
Including many moves that I actively teach. Embarrassing!
In particular, given the number of people responding to me, I've fallen into a pattern of giving counterarguments to specific, false (in my view) claims, without checking / showing that I've understood the claims.
So (aided by @VictorLevoso's example in a private correspondence), I'm going to offer a paraphrase of my current understanding of the Crit Rat view on AI risk, in a central place where everyone can respond at once.
By the way, everyone-who's-disagreeing-with-me-about-AI-risk-on-twitter,
This video is a great introduction to the problem as I, and others I know, think of it. So if you want to make counter arguments, it might be helpful to respond to it.
You might dispute some part of this framing, but it would be good to understand why I'm / we're using it in the first place.
(For instance, it isn't an arbitrary choice to represent goals as a utility function. It solves a specific problem of formalization.)
And if you want to go further than that, @robertskmiles, makes excellent explainer videos on more specific AI Risk problems.
His youtube channel is my go-to recommendation for people who are trying to get up to speed on the shape of the problem.
This quoted text seems really important. How societies and individual institutions adapt to the pandemic is probably the thing that dominates the "sign" of the impact of the pandemic.
I agree that COVID does seem to be right in our Goldilocks zone: not civilization-hobbling in the long term, but bad enough to cause us collectively to take notice and (ideally) to face up to and correct the flaws in our systems.
It's extreme enough that we have to try possibly radical ideas that wouldn't usually see the light of day in order to succeed.
But it looks like that barely happened at all. It seemed like there was very little innovation.
Similarly, if you think I'm foundationally confused, or my frame here is not even wrong, I'd also love to hear that.
I'm aware that there are mathematical Crit Rat critiques that claim to undermine Bayes. I'll also want those eventually, but I'm considering that a separate thread that I'll take in sequence.
So feel free to send me links to that sort of thing, but I won't engage with them, yet.
The most unrealistic thing about an iron man suit?
The fingers!
There's not that much space between your digits. It would be uncomfortable and impractical to put layers of metal in those gaps. And if you did, they would be too thin to provide much protection.
And the fingers also have to bend, which means you have even less space for material, and even less protection.
It would make much more sense if the gloves of the Iron Man suit were like mittens, with all the fingers in one chunk. Then you could put strong layers of metal around all the fingers at once.
I had a dream in which I considered tweeting to ask Dick Grayson why he became a police officer, when he was already Nightwing (which is kind of a substitute for a police officer).
But then I realized that I couldn't do that because it would reveal his secret identity.
Only later did I realize that I couldn't do that because Dick Grayson is fictional.
But nevertheless, I am still left with the original question. Wouldn't it be better to put your resources into one crime-fighting profession or the other?