Let's talk cross-lagged panel models! A short example/provocation, inspired by some discussion yesterday
Say you have 2 things you're interested in, X and Y, each measured at 2 times. You want to know: does X cause Y, does Y cause X, or both? (Or, if you're shy about saying "cause," you say "lead to," "predict," "is a risk factor for," "Granger-cause," etc.)
X and Y could be anything, e.g.:
* Depression and stress
* A personality trait and a social role
* Parental something and child something else
* This brain region and that brain region

So you decide to run 2 regressions:

```
x2 ~ x1 + y1
y2 ~ x1 + y1
```
Graphically, it looks like this:
(This thread also applies if you only fit one of those regressions, by the way. As in: "Does X predict Y, controlling for prior levels of Y?" You haven't measured or analyzed the rest of it, but that doesn't mean it isn't happening, just that it isn't in your data)
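As a concrete sketch, here's one way to fit those two regressions by ordinary least squares in Python. The data below are simulated placeholders with invented coefficients, standing in for the thread's unshown dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated placeholder data (invented values, not the thread's data):
# two measures, each observed at two waves.
x1 = rng.standard_normal(n)
y1 = 0.3 * x1 + rng.standard_normal(n)
x2 = 0.5 * x1 + rng.standard_normal(n)   # x2 depends only on x1 here
y2 = 0.5 * y1 + rng.standard_normal(n)   # y2 depends only on y1 here

def ols(outcome, *predictors):
    """Least-squares coefficients for outcome ~ intercept + predictors."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coefs  # [intercept, coefficient per predictor]

b_x2 = ols(x2, x1, y1)   # x2 ~ x1 + y1
b_y2 = ols(y2, x1, y1)   # y2 ~ x1 + y1
print("x2 ~ x1 + y1:", np.round(b_x2, 3))
print("y2 ~ x1 + y1:", np.round(b_y2, 3))
```

In R this is just `lm(x2 ~ x1 + y1)` and `lm(y2 ~ x1 + y1)`.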
Now let's say your data looks like this. Totally plausible data.
If you run the cross-lagged model (the 2 regressions), you'll find a cross-lagged path of about beta = .12 from x1 to y2. It'll be significant with enough subjects. Yay! Time to start writing. "X predicts / leads to / is a risk factor for Y," right?
But wait! That's not the only model you could have run. What other models might make sense? Here's one in words...
"Part of X is stable over time. Same deal with Y. The stuff that makes X stable (individual differences, consistent environment, etc.) could be correlated with the stuff that makes Y stable. The unstable, time-specific influences on X and Y could be correlated as well."
Here's what that looks like in SEM graphical notation. Notice that nowhere is X affecting Y, or vice versa. Correlated traits, correlated residuals; that's about it.
If you run *this* model, it fits the data perfectly (chisq = 0 with 1 degree of freedom). So, instead of concluding "X affects Y," you could have concluded "X and Y are somewhat-stable, correlated things."

So what's the correct conclusion?
The answer is: nobody freaking knows. Inferences are always conditional on the model. The data don't tell you which model is right (or maybe better to say which is less wrong, h/t George Box). The two models fit the data equally well. (Oh, and btw, there are endless other models that also fit.)
But if the underlying reality is that X and Y are somewhat-stable, correlated things - which sounds pretty reasonable for almost everything in psychology - that alone could be enough to generate this data. Fit the cross-lagged model to it and you'd get a spurious effect.
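To see how a world with no causal paths at all can still yield a "significant" cross-lagged coefficient, here's a minimal simulation sketch. The trait/state variances and correlations are invented for illustration, not taken from the thread's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # big n: we want a precise estimate, not significance theater

# Invented illustration parameters: each variable is a stable trait plus
# occasion-specific noise; the traits of X and Y are correlated, and so
# are the same-wave occasion-specific parts. Nothing causes anything.
trait_var, state_var = 0.6, 0.4
trait_r, state_r = 0.6, 0.3

def correlated_pair(r, var, size):
    """Two mean-zero normal vectors with correlation r and variance var."""
    a = rng.standard_normal(size)
    b = r * a + np.sqrt(1 - r**2) * rng.standard_normal(size)
    return np.sqrt(var) * a, np.sqrt(var) * b

tx, ty = correlated_pair(trait_r, trait_var, n)   # stable parts of X and Y
e1, f1 = correlated_pair(state_r, state_var, n)   # wave-1 occasion noise
e2, f2 = correlated_pair(state_r, state_var, n)   # wave-2 occasion noise

# No causal path between X and Y anywhere in the data-generating process.
x1, y1 = tx + e1, ty + f1
x2, y2 = tx + e2, ty + f2

# Fit the cross-lagged regression y2 ~ x1 + y1 by least squares.
X = np.column_stack([np.ones(n), x1, y1])
(b0, b_x1, b_y1), *_ = np.linalg.lstsq(X, y2, rcond=None)
print(f"coefficient on x1 in y2 ~ x1 + y1: {b_x1:.3f}")  # clearly nonzero
```

With these made-up parameters the x1 coefficient lands around .09 even though X never touches Y, and a sample this size would call it highly significant.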
So what can you do? One, treat those classic two-timepoint cross-lagged models as pretty damn weak evidence of whatever someone's saying they're evidence of
Two, collect more than 2 waves of data. There are better models out there - ones that have both cross-lagged and stable (traitlike) components, and can tease them apart. But you need more timepoints for them to be identified. Lots of good refs, e.g. here: ncbi.nlm.nih.gov/pubmed/25822208
(Three is: do an experiment if you can. If you cannot - because of ethical, practical, or other-kinds-of-validity-matter-too reasons - then know that even the fancypants models have limitations, and know what they are)
None of what I'm saying in this thread is my original contribution or particularly new. But my impression is it's not super well known (b/c people still do cross-lagged analyses), so I'm spreading the word. Also, here's a little R notebook to play with: pastebin.com/HYiZv8s4
P.S. sorry I tweeted a screenshot of the wrong data / correlation matrix, should have been this
(Thread by Sanjay Srivastava)