Valentin Wyart @valentinwyart, 15 tweets
All models are wrong, sure, but how wrong? Check out my BBS commentary (goo.gl/TBf1H7) on the target article by @DobyRahnev and @rndenison (goo.gl/mZPYRh). I argue that the self-consistency of modeled decisions should be considered a relevant metric. 1/15
Relative model comparison is most often used to identify the better model among a limited set of candidates. But how good, in an absolute sense, are the models being tested? The upper bound on model evidence is usually unknown, making this burning question hard to answer. 2/15
The ‘bias-variance trade-off’ of an estimator in statistics offers a useful metaphor for the problem above. Errors of an estimator, like an altimeter on a plane, can be described as a mix of systematic deviations from target (bias) and imprecise measurements (variance). 3/15
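A toy sketch in Python of this statistical metaphor (illustrative only; the altimeter numbers are made up, not taken from the commentary): the mean squared error of a noisy, miscalibrated instrument splits exactly into a squared bias term plus a variance term.

```python
# Toy illustration of the statistical metaphor: an estimator's mean squared
# error splits into a squared bias term plus a variance term.
import numpy as np

rng = np.random.default_rng(0)
true_altitude = 1000.0
# Hypothetical altimeter: +25 m systematic offset (bias), 50 m jitter (variance).
readings = true_altitude + 25.0 + rng.normal(0.0, 50.0, size=100_000)

bias = readings.mean() - true_altitude
variance = readings.var()
mse = np.mean((readings - true_altitude) ** 2)
print(f"bias^2 + variance = {bias**2 + variance:.1f}  vs  MSE = {mse:.1f}")
```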
In cognitive modeling, the bias of a model corresponds to systematic/predictable deviations of modeled decisions from observed ones. These deviations are due to differences between the modeled computations and the ones underlying observed decisions (a.k.a. model ‘misfit’). 4/15
By contrast, the variance of a model reflects unpredictable, random variability in the observed decisions. This variability arises from the internal ‘noise’ postulated by decision theories, from signal detection theory to sampling-based accounts (goo.gl/uhrq6h). 5/15
The self-consistency of modeled decisions across identical repetitions of the same input offers a simple behavioral metric that co-varies negatively with both terms of the bias-variance trade-off. How? Why negatively? Through two well-known mechanisms: 6/15
1. increasing internal noise in a model decreases its self-consistency (the ‘double-pass’ procedure in psychophysics). 2. fitting a biased/wrong model inflates the estimated internal noise (goo.gl/gLJCBY), hence decreasing the self-consistency of modeled decisions (see point 1). 7/15
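A minimal sketch of mechanism 1 (illustrative only; the stimulus distribution and noise levels are made up): in a double-pass design, the same stimuli are presented twice and consistency is the fraction of matching responses, which drops as internal noise grows.

```python
# Illustrative double-pass sketch: present the same stimuli twice and measure
# the fraction of matching responses; consistency drops as internal noise grows.
import numpy as np

rng = np.random.default_rng(1)
stimuli = rng.normal(0.5, 1.0, size=5000)   # made-up stimulus strengths

def double_pass_consistency(noise_sd):
    """Probability of giving the same response on two identical presentations."""
    ev1 = stimuli + rng.normal(0.0, noise_sd, size=len(stimuli))
    ev2 = stimuli + rng.normal(0.0, noise_sd, size=len(stimuli))
    return np.mean((ev1 > 0) == (ev2 > 0))

for noise_sd in (0.5, 1.0, 2.0):
    print(f"internal noise sd = {noise_sd}: consistency = {double_pass_consistency(noise_sd):.2f}")
```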
It is thus possible to obtain an empirical estimate of the bias-variance trade-off of a model by comparing the consistency of its modeled decisions to that of observed decisions. Note that decision *consistency* is very different from decision *accuracy*. 8/15
A model with lower consistency than that of observed decisions is biased, meaning that its computations deviate predictably from the ones underlying observed decisions. The difference in consistency between modeled and observed decisions quantifies how large the bias term is. 9/15
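A hypothetical sketch of that comparison (names and settings are illustrative, not the actual fitting code): the ‘observed’ decisions come from a toy participant, while the stand-in fitted model has absorbed unmodeled structure as extra internal noise, so its simulated decisions are less self-consistent.

```python
# Hypothetical comparison: 'observed' decisions come from a toy participant,
# while the stand-in fitted model has absorbed unmodeled structure as extra
# internal noise, so its simulated decisions are less self-consistent.
import numpy as np

rng = np.random.default_rng(2)
stimuli = rng.normal(0.3, 1.0, size=5000)   # made-up inputs

def consistency(decide, stimuli):
    """Fraction of identical responses across two passes over the same stimuli."""
    return np.mean(decide(stimuli) == decide(stimuli))

def participant(s):
    # toy 'observed' decision process with moderate internal noise
    return (s + rng.normal(0.0, 1.0, size=len(s)) > 0).astype(int)

def fitted_model(s):
    # stand-in model whose misfit shows up as inflated internal noise
    return (s + rng.normal(0.0, 2.0, size=len(s)) > 0).astype(int)

obs_c, mod_c = consistency(participant, stimuli), consistency(fitted_model, stimuli)
print(f"observed consistency: {obs_c:.2f}")
print(f"modeled  consistency: {mod_c:.2f}")
print(f"consistency gap (signature of bias): {obs_c - mod_c:.2f}")
```

The size of the gap, not its sign alone, is what carries the information: a larger gap means a larger share of the fitted noise actually reflects model bias.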
The most interesting feature of this approach is that one can quantify the total amount of bias without knowing its exact form. For example, a decision leak and a miscalibrated decision criterion will both show up as bias in a model lacking these two idiosyncratic effects. 10/15
We have applied this approach to human probabilistic reasoning (goo.gl/hqtUf3) and reward-guided learning (goo.gl/BtNAXR), both fitted using popular models, and showed that in both cases the deviations break down into roughly 1/3 bias and 2/3 variance. 11/15
This is important, because the variance term provides an upper bound on the predictability of observed decisions in the studied task. The fact that the variance term is large suggests that human decisions are intrinsically variable in these two widely used tasks. 12/15
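A sketch of why this matters (illustrative only, with made-up numbers): even an ‘oracle’ model that knows the true per-trial choice probabilities cannot predict single decisions above a ceiling set by intrinsic variability, and that ceiling follows from the same consistency logic.

```python
# Illustrative sketch: intrinsic choice variability caps how well ANY model can
# predict single decisions, and the cap follows from the same consistency logic.
import numpy as np

rng = np.random.default_rng(3)
stimuli = rng.normal(0.5, 1.0, size=20_000)        # made-up inputs

p_choose_1 = 1.0 / (1.0 + np.exp(-stimuli))        # true per-trial choice probabilities
choices = rng.random(len(stimuli)) < p_choose_1    # observed decisions, variability included

expected_consistency = np.mean(p_choose_1**2 + (1.0 - p_choose_1)**2)
prediction_ceiling = np.mean(np.maximum(p_choose_1, 1.0 - p_choose_1))
oracle_accuracy = np.mean((p_choose_1 > 0.5) == choices)   # oracle knows p, still imperfect

print(f"expected double-pass consistency:  {expected_consistency:.2f}")
print(f"prediction ceiling for any model:  {prediction_ceiling:.2f}")
print(f"oracle accuracy on simulated data: {oracle_accuracy:.2f}")
```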
Of course, the success of the approach depends on the ‘richness’ of the input. I mean, a binary stimulus space with only two exemplars (e.g., signal vs. noise) cannot reveal biases other than a miscalibrated decision criterion. 13/15
Words of caution: 1. sequential decision biases spill into the variance term, and thus have to be modeled explicitly (goo.gl/hqtUf3); 2. sequential learning tasks have to satisfy additional constraints for the approach to be valid (goo.gl/BtNAXR). 14/15
So this is not a silver bullet by any means, but hopefully a step in the right direction. To learn more, read my BBS commentary (goo.gl/TBf1H7) and the examples cited above. And congrats again to @DobyRahnev and @rndenison for their insightful target article! 15/15