Background: Research in psychology, behavioral & neuroeconomics searches for individual differences in and correlates of rationality. Indices of revealed preference consistency (GARP) are used as ad hoc measurements of the supposedly latent concept of rationality, often interpreted as a psychological construct.
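For context, here is a rough sketch of how one such consistency index can be computed: check GARP over observed choices, then find the largest Afriat efficiency level at which the data still pass (the CCEI). This is a minimal Python illustration of one common formulation, not our analysis code; function names and tolerances are made up.

```python
# Minimal GARP / CCEI sketch, assuming `prices` and `bundles` are
# (n_obs, n_goods) NumPy arrays of observed prices and chosen bundles.
import numpy as np

def satisfies_garp(prices, bundles, e=1.0, tol=1e-12):
    """True if the choices satisfy GARP at Afriat efficiency level e."""
    exp = prices @ bundles.T          # exp[i, k] = p_i . x_k
    own = np.diag(exp)                # own[i]   = p_i . x_i
    # Direct revealed preference at efficiency e: e * p_i.x_i >= p_i.x_k
    relation = e * own[:, None] >= exp - tol
    # Transitive closure (Warshall) -> full revealed-preference relation
    for k in range(len(bundles)):
        relation |= relation[:, [k]] & relation[[k], :]
    # Violation: x_i revealed preferred to x_k, yet x_i costs strictly less
    # than e * p_k.x_k at prices p_k
    violation = relation & (e * own[None, :] > exp.T + tol)
    return not violation.any()

def ccei(prices, bundles, iters=30):
    """Critical Cost Efficiency Index: largest e in [0, 1] at which GARP holds."""
    if satisfies_garp(prices, bundles, e=1.0):
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if satisfies_garp(prices, bundles, e=mid) else (lo, mid)
    return lo
```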
The identification of robust correlates requires precise measures (cf. Hedge et al.). Hence, the goal of this project was to probe the reliability of individual rationality measurements. link.springer.com/article/10.375…
Methods: Our empirical analyses draw on multiple original and published datasets that vary in choice domain, choice complexity, study context (lab, online), incentivization structure, study population, sample size, task structure, and measurement length.
Overall, we evaluated the reliability (ICC) of rationality in 8 datasets with, in total, over 1600 participants, including a preregistered replication. We owe thanks to all authors for making their original data available. #openscience #datasharing osf.io/kd4hw/ via @OSFramework
Results: We found that across datasets the inter-method, test-retest, and split-half reliability was moderate to poor according to common standards (all ICC estimates < 0.75; 95% CIs exclude benchmark of 0.75). doi.org/10.1016/j.jcm.…
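If you want to compute such ICCs with confidence intervals for your own data, here is a minimal sketch using pingouin; the toy long-format data (columns 'subject', 'session', 'score') are made up and just stand in for one rationality index measured twice per person. Not our pipeline.

```python
# Test-retest ICC with 95% CI via pingouin on toy long-format data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "session": np.repeat(["test", "retest"], n),
    "score":   rng.uniform(0.7, 1.0, 2 * n),     # placeholder scores
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="session", ratings="score")
# ICC2: two-way random effects, absolute agreement, single measurement
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```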
Further, by allowing participants to revise choices (method by Breig & @p_feldman) + an analysis of the within-subject variance, we provide evidence that this result was not driven by large measurement error or random noise but by low inter-individual differences. papers.ssrn.com/sol3/papers.cf…
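The intuition: ICC = between-subject variance / (between-subject + within-subject variance), so reliability can be poor even when individual behavior is quite stable, simply because people barely differ. A toy illustration with made-up numbers (not our data):

```python
# ICC can be low despite small within-subject noise if true scores barely differ.
import numpy as np

rng = np.random.default_rng(1)
n, between_sd, within_sd = 500, 0.02, 0.03      # tiny true differences, tiny noise
true = rng.normal(0.95, between_sd, n)          # e.g. scores clustered near the ceiling
test   = true + rng.normal(0, within_sd, n)
retest = true + rng.normal(0, within_sd, n)

print("empirical test-retest r:", np.corrcoef(test, retest)[0, 1])        # low
print("analytic  ICC:", between_sd**2 / (between_sd**2 + within_sd**2))   # ~0.31
```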
In the supplemental results we show with a back-of-the-envelope simulation that using an individual's CCEI or HMI measurement to predict another measurement from the same individual yielded roughly twice the prediction error of simply assuming the population mean.
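The logic of that comparison, as a minimal sketch (illustrative parameters, not the supplemental code): at low reliability, a person's own first measurement predicts their second measurement worse than the population mean does.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reliability = 10_000, 0.3                     # share of variance due to true differences
true  = rng.normal(0, np.sqrt(reliability), n)
noise = np.sqrt(1 - reliability)
m1 = true + rng.normal(0, noise, n)              # first measurement
m2 = true + rng.normal(0, noise, n)              # second measurement, same people

mse_own  = np.mean((m2 - m1) ** 2)               # predict retest from own first score
mse_mean = np.mean((m2 - m1.mean()) ** 2)        # predict retest from the sample mean
print(mse_own / mse_mean)                        # > 1 whenever reliability < 0.5
```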
Conclusions: We demonstrate that the reliability of individual rationality measurements cannot be assumed until shown otherwise. This has direct implications for published, correlational research, including our own doi.org/10.1016/j.psyn… *sigh* @ManuelaSellitto @kalenscher
Personal view: Negative results can be frustrating. This work arose from our own experiences during my dissertation work (we are heavily invested in #GARP). But, I am very happy that editors like @TobiasUHauser see value in publishing this type of work.
Credits (1/2): methodology, formal analysis, data curation, writing - original draft, visualization, project administration by F.J.N.; conceptualization by F.J.N., L.M.L., and T.K.;
Credits (2/2): software by F.J.N., L.M.L., and N.L.; supervision by F.J.N. and T.K.; investigation by F.J.N. and L.M.L.; writing - review and editing by F.J.N., L.M.L., N.L., and T.K.; and funding acquisition by T.K.
PS: Thanks @yueyuehu2 for discussion and moral support. Thanks to Paul Kramer, Paula Klug, and Hannah Wahle for superb research assistance.
This was a ride! My first registered report (stage 2) w/ @kalenscher was just published at Royal Society Open Science 🎉🎉 "Influence of memory processes on choice-consistency". Here's a quick tweeprint...
We aimed to test the influence of memory retrieval of exemplars on choice consistency in a novel visual choice paradigm. Participants had to select, from a set of five cubes, the one whose orientation along two dimensions was subjectively most similar to that of the exemplar.
Drawing an analogy between theories in visual and value-based choice, we hypothesized that choice consistency would be related to exemplar (goal) representation strength and decrease over retention time. The second picture shows a data simulation for this hypothesis.