Having the thought “I should do X” does not logically imply that “I do X” is true. Not even a little bit. Nor does having that thought *cause* you to do X. Ever. No matter how hard or how frequently you think it.
Hypothesis for how intentional behavior change can actually work: how we *ever* manage to get from “ought” to “is”:
You have the thought: “I ought to do X.” Then you zoom in on X. What is X? What would it mean to do X?
You mentally simulate what doing X might entail. (“In order to do intermittent fasting, I’d first have to decide which meal to skip. One option is skipping breakfast.”)
Now you have a mental model of what X is, applied to your own potential action, i.e. in an “egocentric” coordinate system.
If X is literally a single motion, like lifting your arm, you can feel this representation as sort of a “ghost motion.” Before you move your arm, you have a kinaesthetic simulation of what it would feel like to move your arm.
I claim it is *literally impossible* to move your arm without such a simulation. (The simulation may happen so soon before the motion that you don’t notice, but meditation makes it more noticeable.)
I also claim that the “ghost motion” before moving your arm, and the “how WOULD I go on this diet” simulation are two instances of the same kind of thing.
Both of these claims come from personal introspection, but the “simulated movement precedes movement” claim iirc has support from neuroscience. Also, studies show athletes improve performance from *mentally simulating* doing sports, and iirc pro athletes actually do visualize.
The implication is that it is also literally impossible to go on a diet without mentally simulating *how* you would go on a diet *should* you wish to.
It is obviously true, and not at all controversial, that you can’t go on a diet if you literally don’t know how. That’s not the point I’m making.
The point is, since humans are not logically omniscient, that just because you know the declarative fact “Intermittent fasting consists of only eating in an 8-hour window” doesn’t mean you have *created the plan*:
“If I were to do intermittent fasting, when I woke up I would make myself coffee but not breakfast.” + whatever nonverbal simulation is necessary to “prepare to do it.”
In the psychological literature these are called implementation intentions, and lots of studies claim they work better than baseline for forming new habits.
When thinking “I should do X” actually causes you to do X, my hypothesis is that the “should” doesn’t cause action directly; it’s the prompt to *think about X*.
I’ve noticed that often I don’t want to open my terminal window to start writing code. But if I ask myself, “If I *did* code the next part of this project, where would I start?” then, once I answer the question, I *do* want to start.
Simulating what you *would* do, *if* you chose to, creates a menu of available actions that you literally did not have queued up in your conscious mind before.
And now that you have the menu in front of you, your (fast, intuitive, maybe-dopaminergic) reward-seeking mechanism *is drawn to one of these options*, which enables you to take it.
It is literally impossible to *override* an algorithm’s reward function; that’s tautological! What an algorithm *can* do is run certain internal modeling processes that change its perceived menu of available options.
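A toy sketch of this point (all names here are illustrative, not from the thread): the selection rule and the reward function stay fixed, yet the choice changes when simulation adds a concrete option to the menu.

```python
def choose(options, reward):
    # The agent always picks the option its reward function rates highest;
    # the reward function itself never changes.
    return max(options, key=reward)

# Hypothetical rewards for a few possible actions.
reward = {"scroll twitter": 5, "snack": 4, "open terminal": 6}.get

# Before simulating "how WOULD I start coding?", that option
# isn't on the perceived menu at all:
print(choose(["scroll twitter", "snack"], reward))

# Simulation adds a concrete, available action. Same reward
# function, same selection rule, different choice:
print(choose(["scroll twitter", "snack", "open terminal"], reward))
```

Nothing “overrode” the reward function in the second call; the menu it was applied to grew.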
Now, what happens if you have the thought “I should do X” (or hear someone else say “You should do X”) and you *don’t* simulate what that would mean if you were to do it?
At best, it remains a mere verbal phrase that you can recite but you do nothing further with it. “In one ear and out the other.”
At worst, you interpret “You should do X” to mean “Instantaneously cause it to already be the case that you are doing X.”
This is literally impossible. You can obey a “should” quickly in clock time *if you do the simulation*. Musicians following a conductor’s baton can follow instructions virtually in real time.
(Actually, musicians probably don’t calculate the “egocentric coordinates” on the fly; they probably built the map from “sight of conductor” to “hand motions” through practice, and have a special-case cached map they can retrieve near-instantaneously.)
But you can at least *relatively quickly* transition from being told “please take the garbage out” to taking the garbage out. It can be a few seconds. It just has to include actually simulating *how* one takes the garbage out.
Trying to believe a logical contradiction, I think, is *the* source of suffering.
(This is a standard psychologization of Buddhism, it’s a tenet of Critical Rationalism, and it also matches my introspective experience.)
An untranslated “should” introduces a logical contradiction! It is saying “(cause it to be true that) you are doing X” when observably you are not doing X!
This is why people can sometimes see criticism/feedback as an attack. Literally *all* criticism, if untranslated, is a commandment to do the literally impossible.
But why would you ever fail to translate feedback? Most people, if they ask you to take out the garbage, don’t mean to say “do it literally instantaneously in a physically impossible fashion.” So why get defensive as *if* they meant that?
One hypothesis: we have bad memories of people who expected obedience faster than we literally could obey at the time, or of demands that were literally impossible to fulfill even *after* simulating them.
I usually use religious commandments as what feel like clear-cut examples of instructions that are definitely impossible to obey and yet intended to be obeyed; but other people claim that’s not true, so I’m not sure.
I’m very confident that the Talmud (which I’m trying to learn cover to cover) describes behaviors as admirable which would be impossible or unwise to attempt (like sleeping 0 hours per night).
Anyhow, I’m inclined to believe that there are, or have been, *at least some* people who demand the impossible, and actually mean it, not something more reasonable.
But okay, if there *are* people who ask the impossible or unreasonable, why should that cause suffering? Why not just reject all impossible demands?
To explain this, I have to posit some inherent limitation in what thoughts are possible, and that makes my model more complicated & so less credible, for occam’s razor reasons. Hmm. I’m stuck.
“Some people demand the impossible” should lead to the update “demanding the impossible is a thing people sometimes do”, but I don’t see why it overcorrects to “all feedback should be interpreted as a demand to do the impossible.”
Hypothesis 1: there is an incredibly prevalent, all-pervading meme, that instantaneous obedience is possible, and even that the function of language is literally to *cause* (with no intervening thought) behavior in another person. This is literally what B.F. Skinner said.
Likewise there are things like Bernays’ Propaganda that claim that we can literally be manipulated directly by outside forces. There are popular Evangelical parenting books that say “delayed obedience is disobedience.”
Perhaps, lots and lots of people believe (erroneously) that instantaneous obedience is possible, and tell you this SO MUCH that it outweighs the evidence of your own experience that it’s impossible.
This causes you to be another person who believes instantaneous obedience is possible, so you perpetuate the meme yourself, and the cycle continues.
(Here I’m using the assumption that you assume “someone said X” is weak evidence for X.)
Hypothesis 2: as in 1, perhaps you’re getting an overwhelming number of signals starting from birth that teach you that instantaneous obedience is possible, but it’s *not* because lots of people hold that (false) belief.
Rather, you’re seeing signals all the time that are like the conductor’s baton: they’re meant to be obeyed instantaneously based on a cached, pre-trained “ghost motion” or “implementation intention.”
The people sending these signals are not mostly deluded; they correctly anticipate that their intended audience knows how to obey. Actual conductors aren’t *wrong* to use batons.
The problem is that you see tons of signals for which you are not the intended audience! So *from your perspective*, the world is full of people making incomprehensible demands of the world at large, which necessarily includes you.
(This is made worse when you can eavesdrop on conversations you weren’t invited to; so, e.g., social media, print media, and agoras/public physical spaces, as well as travel and diverse cities.)
So, if you see enough signals not aimed at you, you may come to believe that instantaneous obedience *without training* is possible, when what’s actually going on is that instantaneous obedience is possible *with training*.
In this model, belief in instantaneous obedience doesn’t have to be overwhelmingly widespread in order to propagate itself; it can ride on the coattails of tons of “innocent” (=not based on falsehood) baton-signals.
(These aren’t all the possibilities, to be clear; just generating a few plausible ideas.)
It's also possible, as @selentelechia suggested, that people get trained on *clumsy* signals; e.g. if people get mad every time you don't instantaneously do something you don't know how to do, you might infer that this means "instantaneous obedience is obligatory"...
@selentelechia even though the people who got mad at you *didn't* believe that instantaneous obedience is obligatory. Maybe you need a *few* sources of authoritarian ideology to promote the hypothesis to your attention, but *mostly* you're being trained on people's unintentional signals.
@selentelechia Anyhow. Instantaneous obedience is *impossible.* Not "evil" or "tyrannical"; it literally doesn't exist. You *can't* do what you're told directly. You *have* to map it to how you would do it *first*, and then your reward function has to be drawn to it.
@selentelechia Other people can't "make you do things." What other people can do is *make you suffer*. They can promote hypotheses to your attention, and the right stimuli in the right order *can* tie you in a knot of trying to believe two contradictory things at once.
@selentelechia There's a (false, IMO) belief you might call "descriptive authoritarianism" -- the theory that people *can* make other people do things, that instantaneous obedience or direct manipulation is possible.
@selentelechia There's also a (probably false?) belief you might call "descriptive individualism" -- the theory that other people, or external circumstances, can't have *any* effect on your mind that you can't undo, in one motion, "at will".
@selentelechia What external circumstances can do is *insert a thing in your awareness*. "I am hearing the phrase 'You should do X.'" You don't get to choose this, I think; it's thrust upon you. Which means that contradictions can be inserted into your "workspace" of awareness.
@selentelechia You can *resolve* contradictions; if you successfully explain away, make sense of, resolve, the temporary contradiction, you can stop suffering. But you may or may not actually do this. Other people can cause you suffering; you may or may not know how to remove it.
@selentelechia To tie back to the previous thread on trauma; certain flawed/suboptimal/irrational/etc patterns of thought and behavior are *not inevitable* -- it is false that they are a necessary part of the human condition -- but also, IMO, *not instantly resolvable upon request.*
@selentelechia You can't just ask someone "stop being fucked up, please", I think. They *literally can't.* Not as in, "it is impossible for anyone not to be fucked up", but "it is impossible for this person to snap out of it instantly just because you asked."
@selentelechia There has to be a *map* of what it would look like to "function well" -- not just at the macro level of "what does a virtuous person look like throughout their life" but "what would being in a good mood look like for me right now".
@selentelechia It's not that the fucked-up person literally doesn't ever have the capacity to reason, be calm, reflect, etc. But saying the words "be reasonable!" is *not the correct spell to invoke sanity*.
@selentelechia I have very rough intuitions about what the invoking spell actually might be, but I have the sense that it's kind of like the "sensory trick" or like entrainment in Parkinson's? In a motor disorder you can "forget how" to do a motion, but can be "reminded how" with a prompt.
@selentelechia You can get back into a sane and well resourced state by getting "off the ground" or getting a "boost" by doing it in a context where it's easier, or by social imitation of someone doing it. It can be easier to sidle in "accidentally" than to try head-on. etc.
@selentelechia Is this "coddling"? Meh. Maybe. If you think "not coddling" (i.e. JUST demanding reasonableness directly) works, I'm curious to hear either anecdotes or data about this.
@selentelechia If a person won't "be reasonable" when asked, they either actually, at the attention-reward-function level, don't want to be reasonable (which I tentatively believe isn't a real possibility, but who knows) or they don't have a currently available path/map to being reasonable.
Thread by Sarah Constantin