Eli Tyre Profile picture
Sep 22, 2020 26 tweets 5 min read Read on X
Here's my take on what was happening around those LessWrong threads about meditation a few years back.
Note that I didn't really read and participate at the time, but I did talk to many of the relevant people in person.

I don't speak for anyone, and I'm sure there will be lots of individuals who don't really fit the narrative I'm about to tell.
But I think this narrative does help contextualize LessWrong's reactions to stuff in the meditation and "woo" space.

Counter-narratives and conflicting data points welcomed.
So, in 2017, Val (@Morphenius) went into the desert and had a kensho / hit stream entry (whatever that means.)

That was influencing his thought and curriculum a lot.
As part of this he wrote a post about a flavor of epistemic puzzle, inspired by (and about) the problem of trying to communicate the insight of kensho.

LessWrong, as fictionalized aggregate entity, didn't like this.
Dramatized:
Val: "So, I had an enlightenment experience, and I've been thinking about the difficulty of figuring out how to think about how to discover insights that are outside the scope of your deepest frames and-"

LW: "I'm VERY skeptical of enlightenment."
Val: "Um. Ok. I'm not really trying to talk about enlightenment per se, I'm trying to point at an epistemic puzzle. Epistemic puzzles are what we do here right?"

LW: "Why should we believe you that you're Enlightened? I don't see any evidence."
Val: "Well here's some stuff."

LW: "That is weak sauce evidence, and I am unconvinced. You pattern-match to cult leader."

Val: "O-Ok."
Here's what I think was happening: when Val tried to broach this topic, he did something that was a culture clash, almost a norm violation.

When he said "I had a Kensho", what LW heard was "I have unquestionable knowledge."

"I know something that you can't verify"
This triggers an alarm for collective-LW, because it sounds like a status claim. Indeed, LW DOES allocate status in large part on the basis of knowledge.

But _claiming_ to have knowledge that others can't access? That's claiming to deserve status without backing it up.
One might look at this situation and call it a trauma response on LW's part. Like it is having an extreme reaction because it would be critically bad if there were knowledge in the world that doesn't fit in the LW frame.

But also, I think that LW is basically right here.
(Given its constraints.)

Like, it is an intellectual ecosystem that is really good at a certain kind of analysis. And that ecosystem functions on the basis of norms like "you don't get to claim status for secret knowledge. You have to show your work."
Failing to uphold that norm would disrupt the ecosystem's ability to function.

Like what happens if LW lets this slip by? Now every time Val disagrees with someone, he can claim to have access to a higher truth that, unfortunately, the poor unenlightened ones can't grasp?
No. That's not how it works around here. We only give credence to ideas that win on the merits of argument.

Not charisma. Not seniority in meditation. Not anything.

"Nullius in verba." ("Take nobody's word for it.")
If you allow the possibility of knowledge that can't be verified from the 3rd person perspective, you break the norms that allow the ecosystem to work.
LW doesn't have the infrastructure to also be good at the 1st-person perspective without giving up on the epistemic norms that make it great at the 3rd-person one.

The 1st-person perspective is in LW's shadow.
(On first pass, the only way to build a comparable institution that can do good epistemic work on phenomena from the 1st-person perspective is to have all the members of that institution trained in standardized and reliable introspection methods, so that they can verify results for themselves.

Maybe other people have better ideas.)
So, I posit, Val violated a norm by claiming to have an important insight without substantiating it.
(You can kind of see this in some of his comments as well. I don't think he meant them this way, but you could read them as lording over others with claimed epistemic superiority.)
I think, in principle, he could have written a different post, that did not accidentally make a status claim, and did not violate that norm.
In fact, I think (though I don't claim to know what point Val was getting at, so maybe I'm off base here) that @slatestarcodex did manage to write something pretty close to exactly that!

slatestarcodex.com/2015/04/21/uni…
That post (I think) points at a similar or identical problem, without making any claim of secret, impossible-for-the-uninitiated-to-verify knowledge.
Anyway, I think that this is a useful frame to keep in mind.

Also, of course, there are lots of people who dismiss anything that looks like woo out of hand.
But I think that for many LW rationalists, it isn't that meditation or whatever is obviously fake; it's that there's a norm against letting people get away with claiming epistemic superiority on the basis of knowledge that can't be easily verified.
@Morphenius, my guess is that you don't have much desire to wade into this again, but feel free to chip in if you feel like I've misrepresented you.
