Last paper to read before the holiday! @tunc_necip and I would like to announce that we’ve uploaded the latest, thoroughly revised version of our manuscript. Here is a 🧵 summarizing the main points and major additions. 10.31234/osf.io/pdm7y 👇
1. Auxiliary hypotheses are indispensable, because they allow us to derive testable predictions from theoretical claims. These range from ceteris paribus clauses to assumptions about the research design and instruments, the accuracy of the measurements, the validity of the operationalizations, etc.
2. But when interpreting test results, especially non-corroborative ones, it can be extremely difficult to disentangle their implications for the auxiliaries and the main hypothesis. In philosophy of science this is called the problem of underdetermination. For more, see
3. Due to this ambiguity, auxiliaries may also be used to deflect falsification by providing “alternative explanations” of findings. This problem isn't fatal to the extent that auxiliaries can be independently validated and safely relegated to “unproblematic background knowledge.”
4. Unfortunately this is usually an unrealistic expectation in the so-called “softer” sciences, where theories tend to be loosely organized, measurements noisy and constructs vague. This makes it very difficult to resolve theoretical disputes in these fields.
5. But when a given scientific field lacks consensus regarding established evidence and how exactly it supports or contradicts competing theoretical claims, the scientific community cannot appraise whether there is scientific progress or merely a misleading semblance of it.
6. Lakatos maintained decades ago that most theorizing in social sciences risks making merely pseudo-scientific progress. Meehl’s old observation is still relevant: theoretical claims don’t die at the hands of evidence but are discontinued due to sheer loss of interest.
7. Are social sciences doomed? Nope, but there is a huge need for taking the problem of underdetermination seriously and devising adequate methodological solutions.
8. SRF (the Systematic Replications Framework) tackles underdetermination by disentangling the implications of the findings for the main hypothesis and the auxiliaries through a pre-planned series of logically interlinked close and conceptual replications. For more, see
9. Has nobody thought of systematically organizing replications before? Sure they did. There’s been a range of proposals since the ’60s (Sidman, Lykken, Barr et al., Baribault et al., Yarkoni, among others), which all share a common feature: randomization of operationalizations.
10. SRF differs from all these with respect to the underlying philosophical ideas and, relatedly, the concrete methodological objectives. Randomization-based approaches inherit several tenets of classical operationalism.
11. According to operationalism, a concept consists in nothing but the set of operations used to measure its referent. Thus, the set of operations is NOT an index of a theoretical entity that's conceptually represented in a construct: operations don’t measure anything beyond themselves.
12. Randomization-based approaches are neo-operationalist in that they extend the definitions of concepts to all “possible” operationalizations. They assume that each operationalization introduces some random error, so they prescribe…
13. …randomly selecting a sample of operationalizations from an imagined universe, hoping that the errors due to each would cancel each other out. This, in turn, would reveal the true nature of the link between concepts, freed from the confounding effects of different operations.
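To make the cancellation logic concrete, here is a minimal, hypothetical simulation (not from the manuscript; the effect size, bias distribution, and sample sizes are all made-up assumptions for illustration). It shows that averaging estimates across randomly sampled operationalizations recovers the true effect only insofar as the operation-specific errors are zero-mean:

```python
import numpy as np

# Toy illustration of the randomization-based logic described above:
# each operationalization adds its own bias to the measured effect, and
# averaging over a random sample of operationalizations is hoped to
# cancel those biases out. All numbers here are invented for the demo.

rng = np.random.default_rng(42)

true_effect = 0.5   # hypothetical "true" link between the concepts
n_ops = 50          # operationalizations drawn from an imagined universe
n_subjects = 200    # observations per study

# Operation-specific errors, assumed (crucially!) to be zero-mean.
biases = rng.normal(loc=0.0, scale=0.2, size=n_ops)

# One study per operationalization: observed scores reflect the true
# effect, that operationalization's bias, and ordinary sampling noise.
estimates = [
    rng.normal(loc=true_effect + b, scale=1.0, size=n_subjects).mean()
    for b in biases
]

print(f"average estimate across operationalizations: {np.mean(estimates):.3f}")
print(f"true effect:                                 {true_effect:.3f}")
```

The catch, of course, is the one raised in the next tweet: nothing guarantees the operation-specific errors are zero-mean, and the “universe” they are supposedly sampled from is never actually defined.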
14. This neo-operationalism, however, doesn't solve the problems of classical operationalism. One is the circularity of how concepts and measurements are conceived. And it has unique problems of its own, such as how to define the universe of all possible operationalizations of a concept.
15. In SRF, we conceive of the systematic variation in auxiliary sets not as a random, bottom-up process. The auxiliary sets to be tested should be identified with the aim of examining the most plausible alternative explanations associated with individual auxiliaries.
16. Examining the alternative explanations associated with different auxiliary sets would be a useful method for selecting the riskiest falsification test, and thus it would potentially provide the strongest corroboration for the main hypothesis.
17. In the case of mixed evidence, SRF increases the transparency of how auxiliaries influence “corroboration” and allows us to evaluate post hoc modifications. This feature can foster progressive theory development by revealing the weak spots of theories.
18. SRF also indicates how progressive a research programme is by tracking how researchers respond to non-corroborative evidence. If the corroboration of a theory is conditional on certain auxiliaries, then these auxiliaries can play a falsification-deflecting role.
19. If researchers insist on clinging to their theory despite the “evidence” being conditional on certain auxiliaries (e.g., researcher flair, hidden moderators), we can consider their research programme degenerating.
20. In this respect, SRF facilitates an objective assessment of the Lakatosian progressiveness of a research programme (and this is a super cool feature to have :) That’s all folks! Merry Xmas!
In the debate on “social” priming in Psych Inq, it was maintained in the target article (by Sherman & @amrivers1) and several commentaries (e.g. @missyjferguson & @JeremyCone2; @dalbarra & Wenhao Dai) that we must investigate “operating characteristics” and “moderators”. 🧵1/20
The problem of moderators that we keep facing here, as well as in many other literatures, in fact signals a deeper and prevalent problem to which many authors have drawn our attention for decades: except for a few instances, psychologists almost never test their auxiliary hypotheses. 2/20
Auxiliary hypotheses are those we, out of theoretical and methodological necessity, assume to be true as we test our main hypothesis.
A defense of evo psych on the grounds that there is no logic to scientific inquiry; rather, scientists do and "should" engage in "inference to the best explanation": arcdigital.media/critics-of-evo…
I will compose a thread soon; for now, I can just note a few points I will argue for: 1. If inference to the best explanation characterizes science (and should), then there is hardly a rational criterion by which we can differentiate lay reasoning from scientific inquiry. 1/n
2. There is a big difference between inferring "before the fact" and "after the fact": The former pertains to testing, the latter to interpretation. All organisms interpret, but not all engage in testing. 2/n