Kaj Sotala
Jan 7, 2019 · 17 tweets · 7 min read
Writing a new sequence on LW. My modest goal is to combine models from neuroscience, psychotherapy, meditation, game theory, and a bunch of other places into a better model of how our minds work than the one suggested by our normal folk psychology. lesswrong.com/posts/M4w2rdYg…
First substantial post in my "multiagent models of mind" sequence; it starts off by summarizing some essential points of "Consciousness and the Brain" and explaining Global Neuronal Workspace theory. lesswrong.com/posts/x4n4jcoD…
Second substantial post in my "multiagent models of mind" sequence; it builds up to the #InternalFamilySystems model of mind by talking about how we might design a robot that would behave much as the IFS model predicts. lesswrong.com/posts/5gfqG3Xc…
Third substantial post in my "multiagent models of mind" sequence: I extend the model I've been building up to explain some things about change blindness and about mistaking our thoughts for objective facts, and connect it with the book #TheMindIlluminated.
lesswrong.com/posts/AhcEaqWY…
Fourth substantial post in my "multiagent models of mind" sequence: if we are composed of a society of subagents, then how come we manage to have relatively coherent behavior most of the time? When does this succeed and when does it fail? lesswrong.com/posts/oJwJzeZ6…
5th substantial post in my "multiagent models of mind" sequence. If parts of our minds disagree, how do we get them to agree? Why don't they agree by default? Are there situations in which they refuse to talk with each other? Are they subagents or just beliefs? lesswrong.com/posts/hnLutdvj…
6th substantial post in my "multiagent models of mind" sequence. I flesh out the concept of consciousness implementing a virtual Turing machine a bit more, and then apply it to examples such as emotion suppression, internal conflict, and blind spots. lesswrong.com/posts/7zQPYQB5…
7th substantial post in my "Multiagent Models of Mind" sequence, on how trauma shapes our thinking in invisible ways, and how healing one's emotional stuff is a prerequisite for rationality. lesswrong.com/posts/u5RLu5F3…
8th substantial post in my "Multiagent Models of Mind" sequence, connecting it with psychology's dual-process models and reframing "System 2" as working-memory augmented collaboration between subagents. lesswrong.com/posts/HbXXd2gi…
I summarize a book which claims to provide a theoretical model for how lasting emotional transformation (as well as any genuinely effective therapy) works. Its model seems very promising to me. lesswrong.com/posts/i9xyZBS3…
Meditation has been claimed to have all kinds of transformative effects. I offer an explanation for one mechanism: increasing a person's introspective awareness, leading to greater psychological unity as internal conflicts are detected and resolved.
lesswrong.com/posts/WYmmC3W6…
Here is the introduction to a series of posts where I try to explain insight meditation, enlightenment, and particularly the Buddhist three characteristics of existence in a secular, non-mysterious way.
lesswrong.com/posts/Mf2MCkYg…
In this post, I start explaining what is going on with the Buddhist notion of "no-self" and what's up with meditation that affects the sense of self, as non-mysteriously as I can make it. (Spoiler: the self is like Google Maps. Well, kinda.)
lesswrong.com/posts/W59Nb72s…
Here I started outlining my model of what's going on with suffering, and how it relates to the cognitive science theory of predictive processing and Buddhist teaching about craving and attachment.
lesswrong.com/posts/gvXFBaTh…
Buddhists talk a lot about the self, and also about suffering. They claim that if you come to investigate what the self is really made of, then this will lead to a reduction in suffering. Why would that be? Here's my take... lesswrong.com/posts/r6kzvdia…
... the brain constructs a story of there being a single decision-maker. And while the story tends to be closely correlated with the system's actions, the narrative self does not actually decide the person's actions; it's just a story of someone who does.
lesswrong.com/posts/h2xgbYBN…
Few people would seriously claim that either physical things or mental experiences last forever. However, there are ways in which the fact of impermanence does contradict the brain's intuitive, built-in assumptions. lesswrong.com/posts/T8gD9mRD…


More from @xuenay

Jun 5, 2023
(Part I.) Sometimes I find my thoughts looping through the same motions.

I'm thinking of where to go to eat. Different options pop into my mind, none of them perfect.

"Ugh, that place is expensive"
"Ugh, that place has no good vegetarian options"
"Ugh, that place is far away" Image
After a while, I notice that my mind has bounced back to the beginning of the list.

"Ugh, that place is expensive..." Image
If I pay close attention, I may notice a slight sense of... bouncing off, each time I think of an option.

It's as if each ugh is surrounded by a shell that deflects my mind, preventing it from fully processing the ugh.
Jan 28, 2023
1000 hours of formal recorded meditation since January 18, 2018.

Doesn't include: probably a similar amount of unrecorded semi-formal meditation, a hard-to-estimate but significant amount of "off-the-couch" practice, and the practice I did after 2009 before starting to use this app.
(Note that this screenshot has been slightly edited, since for some reason the "average per day" number it actually shows me is twice what it should be; the correct amount is 33.1 minutes [I couldn't be bothered with editing that last digit].)
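A quick sanity check of that corrected figure (just my own back-of-the-envelope arithmetic, assuming the recording period runs from the Jan 18, 2018 start date to this tweet's date):

```python
from datetime import date

total_minutes = 1000 * 60  # 1000 recorded hours
days = (date(2023, 1, 28) - date(2018, 1, 18)).days  # length of the recording period
print(round(total_minutes / days, 1))  # ~32.7 min/day, close to the quoted 33.1;
# doubling it would give ~65 min/day, matching the "twice what it should be" glitch
```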
Several people asked about the effects.

It's a difficult question. I'm sure my mind is significantly different now than before, but effects come gradually so it's hard to remember how things were before. (I have a history of forgetting even huge changes: kajsotala.fi/2015/08/change… )
Jul 18, 2022
I was feeling rushed this morning. It wasn't that there was any real urgency, but I wanted to get a reasonable amount of work done today, and I'd been having a slow start to the day.
Besides work things, there were also several personal things that I needed to get done, and I was feeling an acute ugh that argh I need to do that and I need to do this and why didn't I do anything yesterday and now I'm going to feel rushed for the rest of the week again.
Then I remembered that the feeling of urgency isn't a fact about the world, it's a fact about my own mind.
Jun 26, 2022
Thread of my favorite game intro cinematics
Battletech.

600 years of future history compressed into two minutes. Gives me cold shivers each time.
Deus Ex.

Talk about foreshadowing and getting you interested in what's going to happen. (And all that stuff about a worldwide plague lands even more strongly than it did before COVID.)
Jun 25, 2022
I was recently asked how literally I take the Internal Family Systems model, in which your mind is divided into "parts" that are kinda like subpersonalities

Short answer: more than just metaphorical, but also not as literal as you might think from taking IFS books at face value
I do think that there are literally neurological subroutines doing their own thing that one has to manage, but I don't think they're full-blown subminds; they're more like...
clusters of beliefs and emotions and values that get activated at different times, and that can be interfaced with by treating them _as if_ they were actual subminds
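A toy sketch of that framing (the names and fields here are my own hypothetical illustration, not anything from the IFS literature): a "part" as a context-triggered cluster rather than a full submind.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A 'part' as a cluster of beliefs, emotions, and values that
    activates in certain contexts -- not a full-blown submind."""
    name: str
    beliefs: list[str]
    emotions: list[str]
    values: list[str]
    triggers: set[str] = field(default_factory=set)

    def activated_by(self, context: set[str]) -> bool:
        # The cluster "comes online" when any of its triggers appear.
        return bool(self.triggers & context)

critic = Part(
    name="inner critic",
    beliefs=["mistakes are dangerous"],
    emotions=["anxiety"],
    values=["safety"],
    triggers={"deadline", "public feedback"},
)

# Whichever clusters the current context activates are the ones we can
# interface with *as if* they were subagents.
active = [p for p in [critic] if p.activated_by({"deadline"})]
print([p.name for p in active])  # ['inner critic']
```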
Apr 8, 2022
Holy -

I'm getting cold shivers reading this paper

They got ML models, trained on different modalities but all taking language as input/output, to reason together by using language as a common representation

This is getting so close to "parts" models of the mind it's scary
My internal linguistic representation while reading this paper:

"Oh fuck"

"Whoa"

"Oh my god"
It does things like having a visual model output a location ("front porch") and the items detected in that scene ("package, driveway, door"), and then having a language model like GPT-3 output a description of what this implies the system is doing ("I am receiving a package").
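A minimal sketch of that kind of pipeline, as I read the description above; the model stubs and prompt format are hypothetical stand-ins, not the paper's actual interface:

```python
def vision_model(image) -> dict:
    # Stand-in for a multimodal model that reports what it sees as text.
    return {"place": "front porch", "objects": ["package", "driveway", "door"]}

def language_model(prompt: str) -> str:
    # Stand-in for a GPT-3-style model; the example output is hard-coded here.
    return "I am receiving a package."

def describe_activity(image) -> str:
    scene = vision_model(image)
    # Language as the common representation: the vision model's text output
    # becomes part of a prompt that the language model completes.
    prompt = (
        f"Place: {scene['place']}. "
        f"Objects: {', '.join(scene['objects'])}. "
        "What am I doing?"
    )
    return language_model(prompt)

print(describe_activity(image=None))  # -> "I am receiving a package."
```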
