In the debate on “social” priming in Psych Inq, the target article (by Sherman & @amrivers1) and several commentaries (e.g., @missyjferguson & @JeremyCone2; @dalbarra & Wenhao Dai) maintained that we must investigate “operating characteristics” and “moderators”. 🧵 1/20
The problem of moderators we keep facing here, as in many other literatures, in fact signals a deeper and pervasive problem to which many authors have drawn our attention for decades: with few exceptions, psychologists almost never test their auxiliary hypotheses. 2/20
Auxiliary hypotheses are those we assume to be true, out of theoretical and methodological necessity, as we test our main hypothesis. For example, the target article mentions the auxiliary hypotheses > 3/20
that the length of the delay between prime and target does(/not) influence the priming effect, that dichotomous, multiple-choice, and continuous measures of behavior as DVs are(/not) equivalent, or that a within-subjects design is(/not) equivalent to a between-subjects design > 4/20
(For instance, Fritz Strack & @aanobs indicate that a within-subjects design risks providing cues concerning the purpose of the study: tandfonline.com/doi/pdf/10.108…). 5/20
When these and other auxiliary hypotheses aren't sufficiently tested beforehand to illuminate how they influence the results, we can hardly interpret what those results imply for the main hypothesis, whether they are (apparently) consistent with it or not. 6/20
In both cases the results can be blamed on the auxiliary hypotheses, so falsification or corroboration of the main hypothesis becomes nearly impossible. This problem is in fact a familiar one in phil sci: empirical underdetermination. 7/20
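To spell out the logic schematically (a minimal formal sketch of the Duhem–Quine point; the notation H, A_i, O is mine, not the thread's): a prediction O is derived from the main hypothesis H only in conjunction with auxiliaries A_1, …, A_n, so a failed prediction refutes only the conjunction as a whole.

```latex
% Duhem-Quine schema: the main hypothesis H yields a prediction O
% only together with auxiliary hypotheses A_1, ..., A_n.
(H \land A_1 \land \dots \land A_n) \rightarrow O
% A failed prediction refutes only the whole conjunction (modus tollens):
\neg O \;\Rightarrow\; \neg (H \land A_1 \land \dots \land A_n)
% which, by De Morgan, leaves only a disjunction of suspects:
\neg H \lor \neg A_1 \lor \dots \lor \neg A_n
% Without independent tests of the A_i, blame cannot be localized to H.
```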
The problem of underdetermination proves to be particularly difficult in the social sciences, where auxiliary hypotheses are harder to test independently, and paradigmatic theories don’t organize the fields in the form of strong nomological networks. 8/20
The social sciences aren't oblivious to this problem either. Meehl emphasized its importance repeatedly and in strong terms (doi.org/10.2466/pr0.19…). Had he lived a little longer, he would also have worked on his big plan to catalogue all important auxiliary hypotheses in psychology. 9/20
More recently, several other authors have made important remarks on how common hypothesis-testing practices in psychology need to be reformed, with emphasis on the role of auxiliaries: 10/20
@briandavidearp & Trafimow (2015) and Trafimow (2012) address the ambiguity of failed replications and theory falsifications that results from inadequate attention to auxiliary assumptions when designing replications and theory tests. 11/20 frontiersin.org/articles/10.33… & doi.org/10.1177/095935…
@annemscheel et al. (2020) explicate how premature testing of hypotheses leads to the accumulation of uninformative results. 12/20 pubmed.ncbi.nlm.nih.gov/33326363/
@gershbrain (2019) addresses how ad hoc auxiliary hypotheses protect beliefs from disconfirmation. 13/20 link.springer.com/article/10.375…
What has changed after all these calls to place more emphasis on the role of auxiliary hypotheses? Nothing much. There is still little awareness of the need to explicitly specify auxiliary hypotheses when designing strong tests. 14/20 frontiersin.org/articles/10.33…
Since nothing has changed, we keep having the same debates over and over, like the debate on “moderators” in the “social” priming literature. Psychology can and should do better. 15/20
Not to sound alarmist, but psychology might long have been facing an auxiliary-hypotheses crisis. The first step is to take the problem seriously, and to understand that it is not solved by saying “it would also be good to think about potential moderators”. 16/20
Psychology's chronic problem of underdetermination won't be solved unless auxiliary hypotheses are examined in an organized and systematic manner. Otherwise, there will be special issues on the same research questions every seven years, which is indicative of huge research waste. 17/20
With half the effort and resources that go into creating this research waste, psychology could build much more reliable and credible literatures. As a way to tackle this problem, @tunc_necip and I humbly suggest the systematic replication framework (SRF): psyarxiv.com/pdm7y/ 18/20
SRF is a (sophisticated falsificationist) method for reducing the ambiguity of hypothesis-test results by eliminating alternative explanations of findings that are due to auxiliary hypotheses. Main points and a summary can be found in this thread: 19/20
SRF or alternative systematic approaches might at first sound too costly and impractical. But then consider the costs of the current methodological status quo in terms of the research waste it keeps creating. 20/20
