Note that I didn't really read and participate at the time, but I did talk to many of the relevant people in person.
I don't speak for anyone and I'm sure there will be lots of individuals that don't really fit the narrative I'm about to tell.
But I think this narrative does help contextualize LessWrong's reactions to stuff in the meditation and "woo" space.
Counter-narratives and conflicting data points welcomed.
So, in 2017, Val (@Morphenius) went into the desert and had a kensho / hit stream entry (whatever that means).
That was influencing his thought and curriculum a lot.
As part of this he wrote a post that was about a flavor of epistemic puzzles, that was inspired by / about the problem of trying to communicate the insight of Kensho.
LessWrong, as fictionalized aggregate entity, didn't like this.
Dramatized:
Val: "So, I had an enlightenment experience, and I've been thinking about the difficulty of figuring out how to think about how to discover insights that are outside the scope of your deepest frames and-"
LW: "I'm VERY skeptical of enlightenment."
Val: "Um. Ok. I'm not really trying to talk about enlightenment per se, I'm trying to point at an epistemic puzzle. Epistemic puzzles are what we do here right?"
LW: "Why should we believe you that you're Enlightened? I don't see any evidence."
Val: "Well here's some stuff."
LW: "That is weak sauce evidence, and I am unconvinced. You pattern-match to cult leader."
Val: "O-Ok."
Here's what I think was happening: when Val tried to broach this topic, he did something that was a culture clash, almost a norm violation.
When he said "I had a Kensho", what LW heard was "I have unquestionable knowledge."
"I know something that you can't verify"
This triggers an alarm for collective-LW, because it sounds like a status claim. Indeed, LW DOES allocate status in large part on the basis of knowledge.
But _claiming_ to have knowledge that others can't access? That's claiming to deserve status without backing it up.
One might look at this situation and call it a trauma response on LW's part. Like it's having an extreme reaction because it would be critically bad if there were knowledge in the world that doesn't fit in the LW frame.
But also, I think that LW is basically right here.
(Given its constraints.)
Like, it is an intellectual ecosystem that is really good at a certain kind of analysis. And that ecosystem functions on the basis of norms like "you don't get to claim status for secret knowledge. You have to show your work."
Failing to uphold that norm would disrupt the ecosystem's ability to function.
Like what happens if LW lets this slip by? Now every time Val disagrees with someone, he can claim to have access to a higher truth that, unfortunately, the poor unenlightened ones can't grasp?
No. That's not how it works around here. We only give credence to ideas that win on the merits of argument.
Not charisma. Not seniority in meditation. Not anything.
"Nullius in verba."
If you allow the possibility of knowledge that can't be verified from the 3rd person perspective, you break the norms that allow the ecosystem to work.
LW doesn't have the infrastructure to also be good at 1st person perspective, without giving up on the epistemic norms that make it great at 3rd person.
The 1st pp is in LW's shadow.
(On first pass, the only way to build a comparable institution that can do good epistemic work on phenomena from the 1st person perspective is to have all the members of that institution trained in standardized and reliable introspection methods, so that they can verify results for themselves.

Maybe other people have better ideas.)
So, I posit, Val violated a norm, by claiming to have important insight, without substantiating that insight.
(You can kind of see this in some of his comments as well. I don't think he meant them this way, but you could read them as lording over others with claimed epistemic superiority.)
I think, in principle, he could have written a different post, that did not accidentally make a status claim, and did not violate that norm.
In fact, I think (though I don't claim to know what point Val was getting at, so maybe I'm off base here) that @slatestarcodex did manage to write something pretty close to exactly that!
That post (I think) points at a similar or identical problem, without making any claim of secret, impossible-for-the-uninitiated-to-verify knowledge.
Anyway, I think that this is a useful frame to keep in mind.
Also, of course, there are lots of people who dismiss anything that looks like woo, out of hand.
But I think that for many LW rationalists, it isn't that meditation or whatever is obviously fake, it's that there's a norm against letting people get away with claiming epistemic superiority on the basis of knowledge that can't be easily verified.
@Morphenius, my guess is that you don't have much desire to wade into this again, but feel free to chip in if you feel like I've misrepresented you.
• • •
1) Society is stratified. Some people are in fact much better off and afforded real privileges and opportunities that others have less access to. Those privileges are an existence proof that "society" can be like a beneficent parent to at least some people.
So it seems like one way that the world could go is:
- China develops a domestic semiconductor fab industry that's not at the cutting edge, but close, so that it's less dependent on Taiwan's TSMC
- China invades Taiwan, destroying TSMC, ending up with a compute advantage over the US which translates into a military advantage
- (which might or might not actually be leveraged in a hot war).
I could imagine China building a competent domestic chip industry. China seems more determined to do that than the US is.
So, my short summary of planet earth is 1) we're building superintelligence without knowing what we're doing and 2) we're torturing ~100 billion non-human animals every single moment.
The moral scale of those things is so large as to dwarf pretty much everything else.
There are a few other things that matter, but mostly because they impact one of those two things.
But I think maybe I should be seriously considering that training / running ML models is painful, as a third thing on the list?
I don't think it's remotely comparable to factory farming in terms of scale of suffering yet. But it's hard to tell when we'll cross that line, because it's hard to compare them with brains.
Does anyone know what the argument for the breath-retention segment of Wim Hof breathing is?
It seems to me, based on the proposed mechanism, you should get all the benefits (and more so?) from straight up hyperventilating, without any breath hold at all.
Does anyone know the claimed reason why holding your breath helps?
The argument that I've heard is that this causes a buildup of CO2, which increases oxygen absorption by the cells.
And that's not...totally unreasonable. Higher acidity DOES cause hemoglobin molecules to release more oxygen on average.
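The mechanism being gestured at here is, I believe, the Bohr effect from standard physiology. A rough sketch of the chemistry (my gloss, not a claim about what Wim Hof proponents actually argue):

```latex
% CO2 dissolved in blood hydrates to carbonic acid, which
% dissociates and lowers blood pH:
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3
  \rightleftharpoons H^+ + HCO_3^-}

% The extra protons stabilize hemoglobin's low-affinity (T) state,
% shifting the equilibrium toward releasing bound oxygen:
\mathrm{HbO_2 + H^+ \rightleftharpoons HbH^+ + O_2}
```

Note that hyperventilation by itself *lowers* blood CO2, so on this story the hold would be the phase where CO2 actually accumulates, which is at least a candidate answer to the puzzle above.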