Does the number of boxes loaded _cause_ the risk of a truck rolling over?

In what ways does confounding (or having access only to certain macro-variables) limit causal inference in neuroimaging?

How are these two questions related?

Check out the thread below 👇

1/
Bear with me: in an attempt to strip the problem down and to provide a starting point for constructive discussion, I am deliberately not using neuro lingo to begin with.
I hope the following idealised, simplified toy example turns out to be instructive.

2/
Let's pretend we are gatekeepers at a dispatch warehouse. Our task is to decide which trucks are good to go and safe to hit the road.
For example, we do not let empty trucks pass (the weighbridge shows the truck's tare weight) to avoid unnecessary empty drives.

3/
Last month, a high number of trucks leaving the warehouse rolled over in accidents.
As gatekeepers, our goal is to reduce that number in the future by only letting trucks pass that satisfy certain safety criteria, and sending them back to be reloaded otherwise.

4/
For this example's sake, assume that all boxes loaded onto trucks are of the same weight and that—unknown to us!—all that matters for the risk of a truck rolling over is how balanced it is, i.e. whether the same number of packages is loaded onto the right and the left side.

5/
To analyse the problem, the personnel is instructed to randomly load trucks for a month.
We now consider two scenarios in which we as gatekeepers have access to different observables; both times we aim to devise an implementable rule for which trucks not to let pass from now on.
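The randomised loading month can be sketched in a small simulation. The concrete loading scheme and the 0.8 roll-over probability for unbalanced trucks are assumptions for illustration only; any scheme in which only balance matters would do:

```python
import random

random.seed(0)

def load_truck():
    """One randomly loaded truck: returns (L, R, Y)."""
    # Hypothetical loading scheme: an even total number of boxes,
    # balanced with probability 1/2, otherwise strictly unbalanced.
    total = 2 * random.randint(1, 10)
    if random.random() < 0.5:
        L = R = total // 2
    else:
        L = random.randint(0, total // 2 - 1)  # strictly fewer on the left
        R = total - L
    # Unknown to the gatekeepers: only balance matters for rolling over.
    # Unbalanced trucks roll over with (assumed) probability 0.8.
    Y = 0 if L == R else int(random.random() < 0.8)
    return L, R, Y

trucks = [load_truck() for _ in range(50_000)]
balanced = [Y for L, R, Y in trucks if L == R]
unbalanced = [Y for L, R, Y in trucks if L != R]
print(sum(balanced) / len(balanced))      # 0.0: balanced trucks never roll over
print(sum(unbalanced) / len(unbalanced))  # ~0.8
```

The gap between the two printed rates is exactly the micro-level effect of imbalance that the gatekeepers are unaware of.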

6/
1) Before we let a truck pass, we ask the driver for L and R, the number of packages loaded onto the left and right side of the truck, respectively. We also obtain Y, indicating whether the truck later rolled over in an accident en route.
For each truck we observe [L,R,Y].

7/
In principle, we will be able to find that [L,R] _causes_ Y.
If we imposed that any truck wanting to proceed through the gate with L ≠ R be sent back & the boxes be re-shuffled, then this intervention on [L,R] would ensure L = R and reduce the chances of roll over accidents.
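This intervention can be sketched on top of the same hypothetical simulation (loading scheme and 0.8 roll-over probability are assumptions as before; the even split at the gate is always possible here because the hypothetical loaders only use even totals):

```python
import random

random.seed(0)

def rollover(L, R):
    # Unknown mechanism: only imbalance matters (0.8 is an assumed rate).
    return 0 if L == R else int(random.random() < 0.8)

def load():
    # Hypothetical scheme: even total, balanced with probability 1/2.
    total = 2 * random.randint(1, 10)
    if random.random() < 0.5:
        return total // 2, total // 2
    L = random.randint(0, total // 2 - 1)
    return L, total - L

n = 50_000
# Without the gate rule: trucks leave as loaded.
no_rule = [rollover(*load()) for _ in range(n)]
# With the gate rule: intervene on [L, R] by re-shuffling to an even split.
with_rule = []
for _ in range(n):
    L, R = load()
    if L != R:
        L = R = (L + R) // 2   # send back, re-shuffle the boxes evenly
    with_rule.append(rollover(L, R))

print(sum(no_rule) / n)    # ~0.4: roll-over rate without the gate rule
print(sum(with_rule) / n)  # 0.0: intervening on [L, R] eliminates roll-overs
```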

8/
2) Assume instead that we cannot (or do not) observe this underlying micro-level, i.e. we do not ask the truck driver for L and R; instead, we only determine the total number of loaded boxes B by differential weighing at the gate.
For each truck we observe [B,Y].

9/
B = L + R is a macro-level observable; it's a "projection" of the underlying micro-level [L,R].
The information about the boxes' locations is lost, i.e. unobserved. (One could also consider projections such as observing only either L or R, or transformations like L^R…)

10/
Question: Does B _cause_ Y?
B and Y are independent in our toy example setup.
How can we as gatekeepers reduce the chances of roll-over events? If we encounter a truck at the gate with B = 42, should we send it back to have 2 boxes removed? Or 14 added?

11/
Given that B & Y are independent (and under faithfulness), we are left to conclude: B does not cause Y.
This may seem counter-intuitive given that B = L + R and that [L,R] indeed causes Y.
Yet intervening on B, by only letting trucks with a certain value of B pass, does not affect Y!
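One can check the independence numerically. Under a loading scheme in which the probability of a balanced load does not depend on the total (an assumption chosen so that B ⊥ Y holds exactly, matching the idealised toy setup), the roll-over rate is flat in B:

```python
import random
from collections import defaultdict

random.seed(0)

counts = defaultdict(lambda: [0, 0])  # B -> [trucks seen, roll-overs]
for _ in range(200_000):
    # Hypothetical scheme: even total, balanced with probability 1/2.
    # P(balanced) does not depend on B, so B ⊥ Y holds by construction.
    B = 2 * random.randint(1, 10)
    is_balanced = random.random() < 0.5
    Y = 0 if is_balanced else int(random.random() < 0.8)
    counts[B][0] += 1
    counts[B][1] += Y

for B in sorted(counts):
    n, y = counts[B]
    print(B, round(y / n, 3))  # roughly 0.4 for every B: B tells us nothing about Y
```

Whatever value of B a truck shows at the gate, its roll-over risk looks the same, so no gate rule based on B alone can help.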

12/
[L,R] causes Y; B doesn't.
[L,R] & Y are dependent; B & Y aren't.

⇒ Finding a projection/transformation/macro-variable to be independent of Y does not imply that its micro constituents are independent of Y.

[G,H,…] can be dependent on/causing Y,
while f([G,H,…]) is not!
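A minimal abstract instance of this (with two fair coins G and H assumed purely for illustration): Y = 1 iff G = H is fully determined by [G,H], yet the projection f([G,H]) = G is independent of Y:

```python
import random

random.seed(0)

samples = []
for _ in range(100_000):
    G, H = random.randint(0, 1), random.randint(0, 1)  # two fair coins
    Y = int(G == H)       # [G, H] fully determines Y
    samples.append((G, Y))

# f([G, H]) = G, a projection: check P(Y = 1 | G = g) for g in {0, 1}.
for g in (0, 1):
    ys = [Y for G, Y in samples if G == g]
    print(g, round(sum(ys) / len(ys), 3))  # ~0.5 either way: G ⊥ Y
```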

13/
Now, how is that linked to neuroimaging, unobservables, and confounding?
In imaging we measure low-d projections (or transformations, in the terminology of auai.org/uai2017/procee…) of a high-d underlying neural level.
Surely, information is lost and not all signal is captured!

14/
Assume we find
A: "average neural firing rate in region X" ("number of boxes")
is independent of
B: "arm movement" ("truck rolled over").

Can we conclude that
1) A doesn't cause B? Yes.
2) "firings of neurons in region X" ("number of boxes left/right") don't cause B? No.

Thoughts?