Some people suggested I write a brief "twitter trailer" of my model combining enlightenment/Buddhist theory with predictive processing, so yeah that could be useful, let me try explaining a part of it. Let's go with the minor one of "what exactly _is_ suffering?".
For those not familiar, predictive processing is a theory whose core claim is that the brain keeps generating predictions of what it is going to experience, and then tests those against the incoming sensory data. If prediction was wrong, it revises the models that generated it.
That's the super-concise version, but the theory has lots of additional details. @slatestarcodex has a nice summary of some of its implications at slatestarcodex.com/2017/09/05/boo… ; I'm also mostly relying on the same book for my model of predictive processing.
(Caveat: I haven't actually finished reading the book yet, which is why this is the preliminary tweetstorm explanation and not my actual article on the topic. But I expect the big picture to be right, even if some details might be off.)
One of the things the book talks about is binocular rivalry (BR). It's a setup where you are shown one picture (e.g. a face) in your left eye, and another (e.g. a house) in the right. People report that their experience keeps alternating between seeing a face and seeing a house.
The way PP explains this is, the brain is trying to make sense of what it is seeing: is it a house, or is it a face?
Sometimes people experience a mashup of the two, but it quickly breaks apart: the brain knows that faces and houses do not exist as occupying the same place at the same scale at the same time, so the prediction of "face/house mashup" is rejected as nonsensical.
The alternation reflects the existence of two sensible hypotheses (under normal circumstances, where your eyes are not fed unnatural data). Either you are seeing a house, so the brain predicts you'll continue seeing a house. Or you are seeing a face, so you'll continue seeing *it*.
Suppose the brain settles on "this is a face". Now the prediction of "I will see a face" gets strongly contradicted by the input of "I see a house" coming from the right eye; there's a big error signal caused by the hypothesis/input mismatch. Meaning the hypothesis must be false.
So the brain abandons the "this is a face" hypothesis, and settles on the alternative one that it can find: "this is a house". The prediction now matches the house input coming from the right eye, so the error signal from the right eye disappears. Correct hypothesis found!
Except, if the brain was seeing a house, it should normally be seeing a house with both eyes. The left eye is *not* seeing a house. This kicks off another stream of error signals, indicating that the "this is a house" hypothesis is being contradicted by the left eye's data.
So the "this is a house" interpretation is rejected as false, too. In the absence of any better idea, the brain settles back on the "this is a face" hypothesis. Which eliminates the error signal from the left eye, but then restarts the error signal from the right. And so on.
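This flip-flop can be sketched as a toy loop (my own illustration, not anything from Clark's book; the two-line "error signal" is a cartoon of the real machinery):

```python
# Toy model of binocular rivalry: the left eye sees a face, the right eye a
# house. Whichever hypothesis the brain adopts, one eye's input contradicts
# it, and that error signal pushes the brain onto the rival hypothesis.
LEFT_EYE, RIGHT_EYE = "face", "house"

def mismatched_inputs(hypothesis):
    """Eye inputs that contradict the current hypothesis."""
    return [eye for eye in (LEFT_EYE, RIGHT_EYE) if eye != hypothesis]

def rivalry(steps, start="face"):
    """Each step, the remaining error signal flips the winning hypothesis."""
    hypothesis, history = start, []
    for _ in range(steps):
        history.append(hypothesis)
        hypothesis = mismatched_inputs(hypothesis)[0]  # switch to the rival
    return history

print(rivalry(4))  # ['face', 'house', 'face', 'house']
```

The point of the sketch is just the structure: neither hypothesis can silence both error streams at once, so the system never settles.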
At this point you are probably asking, "so what does this have to do with suffering?". Getting there! But there's one more detail we need to cover first.
PP claims that physical actions are also neurally represented as beliefs. You predict that the best thing to do in this circumstance is writing a tweet. Then the brain seeks to confirm that hypothesis by writing a tweet and checking whether the results are as expected.
Words like "hypothesis" and "belief" are pretty passive, disembodied. The vibe I get from reading Clark is different. Hypotheses feel like active forces. A hypothesis of what you should expect to see seeks to confirm itself by moving your eyes so that you see it. Or it will die.
(If the notion of your brain being filled with entities that are something like beliefs and something like active agents sounds vaguely familiar, I may have written about something similar before... lesswrong.com/s/ZbmRyDN8TCpB… )
Specifically, beliefs and actions blend together, in that a belief may rewrite your subjective reality to make itself true. It makes you believe that you are going to write a tweet, and then your brain adjusts itself to make the belief come true, by making you write a tweet.
Buddhism talks about craving, and attachment to outcomes, including mental states. It is said that craving tries to create beneficial outcomes, but in a way that actually causes the suffering it is trying to avoid.
I think that craving to achieve or avoid a particular outcome is a specific kind of belief (in the PP sense). It is associated with a subsystem which can inject "priority overrides" into motivation, and does this by rewriting subjective reality so as to fulfill the craving.
(I think of this as somewhat similar to the AI design explained at owainevans.github.io/blog/hirl_blog… . You have one system just dispassionately figuring out what to do next; and then a higher-priority one which tries to force particular outcomes that are judged particularly good or bad)
So to me, a typical case of mental suffering is one where it feels like I am stuck between two unacceptable or impossible options. Suppose I've done something I shouldn't have, and am feeling guilty. I want to admit the truth to my friend, but also know they will be angry at me.
My thoughts go something like: if I were to tell my friend the truth, then I wouldn't feel guilty anymore. For a second this seems promising. But then I realize how angry they would get, and my stomach lurches. No, I should stay quiet. But then I start feeling really guilty.
Buddhism might say that I am driven by two opposing cravings. On one hand, I have craving to avoid the feeling of guilt. On the other, I have craving to avoid the feeling of people being angry at me. Both cravings are trying to do good, but their conflict causes suffering.
In my PP-inspired framework, the cravings are beliefs that are trying to rewrite part of my sense of reality in order to fulfill themselves. One of them sends an input saying "my friend will not be angry at me". Another sends an input saying "I will not feel guilty".
My brain treats these inputs as something like sensory data, and then tries to find a hypothesis (an action or a possible world) that would fulfill them both, after which it would take action to fulfill *that* hypothesis.
There's the input from the craving not to feel guilty, saying "I will not feel guilty". The brain searches for a possible world that would fulfill this condition: admitting the truth to my friend.
It checks the predicted consequences of this: I would stop feeling guilty. This matches the input of "I will not feel guilty" coming from the guilt craving, so the brain settles on this hypothesis of what to do. But this then causes the prediction that my friend will be angry.
That triggers the craving not to be yelled at, which quickly leaps into action. It tries to fulfill itself by inserting the input "nobody will be angry at me" as a prediction of what will happen.
The hypothesis of "I'll tell the truth" now causes an error signal when matched against the input of "nobody will be angry at me". That destabilizes the original hypothesis, causing a search for a better one. The brain finds "I will stay quiet".
That silences the error signal coming from "nobody will be angry at me" - if I stay quiet, indeed nobody will be angry at me. But then the craving not to feel guilty is still sending the input of "I will not feel guilty". And my brain expects to feel guilty if I stay quiet.
So this causes a mismatch and error signal between the input of "I will not feel guilty" and the hypothesis of "I will stay quiet about what I did". It destabilizes the hypothesis of "I will stay quiet".
My general world-knowledge prevents me from finding an outcome which would avoid me feeling guilty _and_ my friend getting upset at me; my brain can't find a plausible scenario in which both would be realized. So it goes back to the hypothesis of "I will tell the truth"...
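The oscillation above has the same shape as the binocular rivalry loop. Here's a toy sketch of it (my own illustration; the actions, predicted consequences, and constraint names are all hypothetical labels, not anything from PP itself):

```python
# Toy model of conflicting cravings: each craving injects a constraint on
# the predicted world. No available action satisfies both constraints, so
# the hypothesis search keeps flipping between the two actions.
PREDICTIONS = {
    "tell the truth": {"I feel guilty": False, "friend is angry": True},
    "stay quiet":     {"I feel guilty": True,  "friend is angry": False},
}
# Constraints injected by the two cravings: no guilt, no anger.
CONSTRAINTS = {"I feel guilty": False, "friend is angry": False}

def violated(action):
    """Constraints contradicted by the predicted consequences of an action."""
    return [c for c, wanted in CONSTRAINTS.items()
            if PREDICTIONS[action][c] != wanted]

def search(steps, start="tell the truth"):
    """Any violated constraint destabilizes the current action-hypothesis,
    so the search flips to the only available alternative."""
    action, history = start, []
    for _ in range(steps):
        history.append(action)
        if violated(action):
            action = next(a for a in PREDICTIONS if a != action)
    return history

print(search(4))  # flips back and forth between the two actions
```

Since every action violates at least one constraint, the search never terminates: that endless thrashing is the structural picture of this kind of suffering.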
Let's take another example: physical pain. There's a strong pain signal in my consciousness. Now, pain is not necessarily unpleasant: one may even enjoy it, such as in the case of hot baths, spices, or sexual masochism. It's when we *resist* pain that there's an issue.
In other words, when we have a craving to be rid of the pain. This again manifests as the craving rewriting reality to realize itself: it inserts the prediction that the pain will cease. The brain then attempts to find a scenario which will fulfill this prediction.
Maybe I could take a painkiller, or see a doctor, or distract myself from the pain, or something. I might find something that my brain judges to be plausible, so I start taking actions towards it.
Subjectively, in situations like these, it feels like my attention keeps alternating between two thoughts: the pain itself, and the thought of the thing that I think will bring me relief.
In the PP model, "attention" is roughly that part of the sensory data which is most useful for the task at hand, and which has the highest estimated precision; because this part of the data is the most useful, the mind weights it more than the rest.
Pain tends to capture attention; I think (not having read the whole book) it could do that just by being hard-coded to have very high precision. But what happens when a craving causes you to have the belief that doing something will make the pain go away?
Well, you need to have feedback on whether or not that is succeeding. So the presence of the pain signal is particularly relevant for seeing whether this belief is helping with the pain. It's task-relevant data with high precision, causing your attention to go to it...
So the craving is creating a task to be rid of the pain, and the belief that you are about to get medicine that will help. As you go to get the medicine, this directs even more attention to the very pain you are trying to avoid.
Also, the way the act of rewriting reality works is that you believe "I get the medicine into my hands right now and it will ease the pain". That is a belief in the present tense; it is contradicted by the fact that you are still in pain.
(A case that would be more analogous with the mental suffering case would be one where you knew that there was no relief to the pain to be had, but still had a strong craving to be rid of it.)
I feel like I'm not explaining this as well as I want to, but that's what you get for reading a stream-of-consciousness Twitter trailer rather than my finished article. Anyway...
The way I framed especially the mental suffering, it was structurally equivalent to binocular rivalry. But note that BR does not feel particularly unpleasant, AFAIK. If both involve the brain looking for a hypothesis that fulfills constraints, why is one unpleasant and the other not?
Also, the standard teaching is that _discernment_ does not lead to suffering, but rather _craving_ does. You can judge some outcome to be better than another, and take actions to achieve it, but that by itself does not cause suffering. It's only when you get attached to outcomes.
The thing with BR is that it is just trying to neutrally figure out what the most likely hypothesis is: am I seeing a face or a house? The system has no intrinsic preference towards either one, it is just checking which one fits the facts best.
Likewise, it is possible to think about what you want to do, and just judge the different options: if A seems better than B, then you do that; if B seems better than A, then you do that. This again lacks an attachment to outcome: you don't intrinsically prefer A.
So we have both craving-based motivations, and non-craving-based motivations. Both have their strengths and weaknesses.
Suppose that you are in a bad situation, where even the best possible course of action only has a 10% chance of getting you through alive. With no strong craving, since that’s the best you can do, you focus on doing the things that will just get you that 10%.
But many of us have a strong craving to live. The craving injects the belief/constraint "I will not die"; this contradicts the "90% chance of dying", and forces the system to try to come up with a scenario which fulfills the constraint of you not dying.
If it is true that a 10% chance of survival really is the best that you can do, then you should clearly just focus on doing the best you can to get the probability even that high. The craving causing trouble by thrashing around is only going to make things worse.
On the other hand, maybe this estimate is flawed and you could get a higher probability of survival by doing something else. In that case, the craving absolutely refusing to go on until you have figured out something better, might be the right action.
If the systems only had different tradeoffs in this sense, it might be best to keep both of them around. But actually, craving does not _really_ care about outcomes; it cares about mental content.
Notice that in the preceding, the craving injects the constraint of "I will not die". This isn't actually a constraint that you could fulfill - you always have some risk of death. What you _can_ do, is avoid the _thought_ of death.
So when the craving injects the belief of "I will not die", this is actually the constraint of "the chosen hypothesis does not make the thought of death seem particularly plausible".
There's a correlation that makes this work: if some action strongly brings death to mind, then it's often because that action *is* likely to cause death, so you may try avoiding it.
But the strategy may also cause you to do something that is less associated with the thought of death, even while making death more likely... and the craving trying to avoid the thought of death, incentivizes the search process not to think about this too much.
Letting go of craving often feels like letting go of the goal. If I'm poor and crave money, I may think "I don't want to stop craving money, I do need the money". But I may be able to consider more alternatives if I *don't* require each scenario to lead to riches right away.
Even without craving money, I still have the motivation which just searches for best actions and implements them, so I might get money anyway. But craving rewrites my reality to make me think that I _will_ get the money; so dropping the craving feels like losing the money.
There are all kinds of reasons for why being too craving-driven could be a bad thing; craving operates by actively distorting your sense of reality and search processes. And you can get paralyzed by conflicting cravings and mutually impossible sets of constraints.
In an extreme case, with no counterbalance, you could have such a strong craving that it would rewrite your reality so as to make you believe you had already fulfilled it, and then you didn't do anything else.
Some people who are really emotionally desperate for something, do seem to do something like this: doing things that seem obviously stupid to everyone else, driven by the craving that this _will_ work because it _must_ work, or because some facts seem too painful to accept.
So it seems sensible for the brain to have some counterbalancing force. Which, many tweets out, finally gets us to the question of just what suffering is.
My model is something like: suffering is a negative training signal for the brain. Whenever craving overwrites another subsystem's model of reality, it creates some amount of error signal. Let's call that error signal "unsatisfactoriness". Note that not all of it is conscious!
Sometimes you may feel like you have had suffering you weren't conscious of: something within you relaxes, you think you feel better, and realize you were suffering before. Some subsystem has noted the change in the error signal, and generated the judgment "this feels better".
(This raises the question of whether you were really suffering before, as in experiencing it consciously. I don't know. Maybe there isn't even a fact of the matter: maybe it's just post-hoc narrative. But that's getting into no-self, and I was supposed to stick to suffering...)
In any case, conscious suffering is kind of hard to describe directly, but it tends to involve some desire to not have it, the mental judgment that you are suffering, and afterwards, wanting to avoid states that caused a lot of suffering.
Those behaviors could be implemented by specific subsystems detecting the presence of the error signal, and causing those kinds of judgments and action tendencies.
(I'm going by the model that e.g. suffering qualia feel ineffable because the system has no access to their internal structure, but has information about _some_ of their associated properties; as in e.g. openphilanthropy.org/sites/default/… and openphilanthropy.org/software-agent… )
So craving = priority override that pushes us to pursue/avoid specific mental experiences by selectively rewriting reality; suffering = a counterbalancing error signal caused by craving, teaching the brain to try to avoid craving.
Of course you might now notice that there's a bit of an issue here, in that craving tries to force actions which will bring us happiness and avoid suffering... while it is actually causing suffering itself. (You may have heard this before, it's called the Four Noble Truths.)
Well, actually it's the first three noble truths; the fourth one is that there's an end to suffering. What's up with that?
Remember when I said that cravings are *hypotheses* that are trying to prove themselves correct? The way that action and belief get intermingled in PP makes this a bit hard to express in terms of belief, but it's something like "craving this outcome will bring happiness".
Now there is some truth to this! Craving _does_ push us towards beneficial states and to avoid negative states, so it _does_ often motivate us to act in ways that could bring happiness. But it's only a partial truth: less attached motivation could bring those states as well.
_Surfing Uncertainty_ says that hypotheses will _selectively_ sample reality so as to confirm themselves, and then continue living. The same is true for craving.
Say you crave ice cream; this causes you to get ice cream; it tastes good. You might feel better because the craving (and associated suffering) ceased when you got the ice cream - craving counts this as evidence for its hypothesis that "craving ice cream is good".
Which is kind of true... the craving ceased when you got the ice cream, so the associated suffering ceased, so you felt better. And without the craving, you might not have gotten the ice cream. But there's a rather obvious "but" here...
(I want to again emphasize that you are allowed to want things; the fallacy is not in considering ice cream good, it's in thinking you *must* have ice cream. Any choice involves both craving and non-craving motivation; wanting the ice cream because it tastes good is valid.)
I should note at this point that I have been talking like eliminating craving would be a pure win. Probably not necessarily!
Craving pushes you to avoid states that feel unpleasant. Now unpleasant states are often bad, but the correlation is only partial; if avoiding unpleasantness is your main criterion, you fall victim to Goodhart's Law, sacrificing the thing that unpleasantness was a proxy for.
But ignoring the proxy entirely is a failure mode too! The need to avoid displeasure normally operates as an automatic feedback mechanism. It is possible to weaken this mechanism without being smart about it, doing nothing to develop alternative mechanisms in its place.
As you have less craving to reduce your own craving, you may become less motivated to work on your remaining emotional issues. If you become irrationally depressed, you may just go "oh, depression, why not".
Or if you get irrationally angry at someone else, you may just be okay with it, whereas previously a strong craving to avoid hurting others would have prevented you. I said that motivations other than craving *may* lead to the same outcomes as craving; not that they *must*.
This is likely the reason why many traditions also emphasize notions such as training in morality in addition to practicing meditation. Just reducing craving will not magically make all of your beliefs perfect or make you completely moral, as the conduct of many teachers shows.
Anyway, I think the brain has a "master craving hypothesis" (PP might call this a hyperprior) that "craving things is needed for happiness"; this then acts as a template for more localized craving, e.g. "getting this ice cream is needed for me to feel happy right now".
I see meditative practices as basically helping you more clearly see how craving works: if you can see what a particular craving is doing and how it is actually causing suffering, the act of seeing it will prove its hypothesis false, and the brain deletes that particular craving.
It's claimed that if you act motivated by craving, then you will end up having more craving. This is the opposite of seeing things clearly: if you act motivated by a craving hypothesis, it will sample data so as to prove itself, and also strengthen the master hypothesis.
On the other hand, if you are successful with the practices that help you see what craving is actually doing, then you will act more out of other forms of motivation. And this will weaken the master hypothesis of craving being necessary for happiness.
But things are complicated by the fact that even if you delete individual cravings, the master craving hypothesis remains - and vice versa. And the master craving hypothesis sits really, really deep.
Say you eliminate some craving, and get into a state with no craving. That feels really good! Your brain (correctly) identifies it as a state you want to be in - and then the master craving hypothesis spawns a craving to get into that state again, which makes it actively harder to reach...
And you might end up feeling that you were just tricking yourself with all the "letting go of craving" stuff because it worked for a while but now it doesn't work anymore and gah why can't I get back to that nice non-craving state, I need it for my happiness.
Also if you *do* manage to weaken the master craving hypothesis in a way which prevents it from spawning new cravings in specific circumstances, your brain is full of craving hypotheses generated over a lifetime of the master hypothesis being strong. Going to take some work...
Another thing is that, as mentioned earlier, I suspect that craving acts in part, as an evolved "emergency override" in situations where you seem to be in danger. This means that craving hypotheses are more likely to win over alternative ones in dangerous-seeming situations.
... but what does it mean for "you" to be in danger? There's something wonky going on with that, given that we established that craving reacts to unpleasant mental states rather than objective outcomes - how coherent is the "you" that it is trying to protect?
That's a rhetorical question, of course. The master craving hypothesis seems closely linked to a different hypothesis about the nature of the self, which can also be investigated and revised, affecting the master craving hypothesis in turn.
But that would get us into no-self and I'm going to finally stop here (phew). This was my stream-of-consciousness talk about predictive processing and suffering, good night and thank you for listening!