Sharpen your intuitions about the plausibility of observed effect sizes.
r > .60?
Is that effect plausibly as large as the relationship between gender and height (.67) or nearness to the equator and temperature (.60)?
r > .50?
Is that effect plausibly as large as the relationship between gender and arm strength (.55) or increasing age and declining speed of information processing in adults (.52)?
r > .40?
Is that effect plausibly as large as the relationship between weight and height (.44), gender and self-reported nurturance (.42), or loss in habitat size and population decline (.40)?
r > .30?
Is that effect plausibly as large as the relationship between elevation and daily temperature (.34), Viagra and sexual functioning (.38), past behavior predicting future behavior (.39), or sleeping pills and insomnia reduction (.30)?
r > .20?
Is that effect plausibly as large as the relationship between marital relationship quality and parent-child relationship quality (.22), alcohol and aggressive behavior (.23), or gender and weight (.26)?
r > .10?
Is that effect plausibly as large as the relationship between antihistamine and runny nose (.11), childhood lead exposure and IQ (.12), anti-inflammatories and pain reduction (.14), self-disclosure and likability (.14), or nicotine patch and smoking abstinence (.18)?
r > .00?
Is that effect plausibly as large as the relationship between aspirin use and death by heart attack (.02), calcium intake and bone mass in premenopausal women (.08), gender and observed risk taking (.09), or parental divorce and child well-being problems (.09)?
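One more way to calibrate these intuitions: square the correlation to get the proportion of variance explained. A minimal sketch in Python, using a few benchmark values from the list above (the pairings and r values are from this thread; the conversion itself is standard):

```python
def variance_explained(r: float) -> float:
    """Proportion of variance explained by a correlation of size r."""
    return r * r

# Benchmark correlations taken from the examples above.
benchmarks = {
    "gender and height": 0.67,
    "weight and height": 0.44,
    "alcohol and aggressive behavior": 0.23,
    "nicotine patch and smoking abstinence": 0.18,
    "aspirin use and death by heart attack": 0.02,
}

for label, r in benchmarks.items():
    print(f"{label}: r = {r:.2f}, r^2 = {variance_explained(r):.3f}")
```

Even the very large gender–height relationship (r = .67) accounts for under half the variance, which is a useful anchor when a new study reports r > .60.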
@Edit0r_At_Large The journal did invite a resubmission if we wanted to try to address them. However, we ultimately decided not to resubmit because of timing. We had a grant deadline to consider.
We incorporated the reviewer suggestions that we could into the final design and proceeded.
We eventually completed the full report, and it was peer reviewed through the normal process.
We published the paper in Nature Human Behaviour.
The RR was originally submitted to Nature Human Behaviour.
I think the RR submission did meaningfully improve our design & odds of success.
New in Nature Human Behaviour: We had 353 peer reviewers evaluate published Registered Reports versus comparison articles on 19 outcome criteria. We found that RRs were consistently rated higher on rigor and quality.
Figure shows performance of RRs versus comparison articles on 19 criteria and 95% credible intervals. Red criteria evaluated before knowing the results, blue after knowing the results, green summarizing whole paper.
Congrats to @cksoderberg Tim Errington @SchiavoneSays @julia_gb @FSingletonThorn @siminevazire and Kevin Esterling for the excellent work on this project to provide an additional evidence base for how Registered Reports can alter the credibility of published research.
10 years of replication and reform in psychology. What has been done and learned?
Our latest paper, prepared for the Annual Review, summarizes the advances in conducting and understanding replication and the reform movement that has grown up around it.
We open w/ an anecdote about the 2014 special issue of Social Psychology. The event encapsulated themes that played out over the decade. The issue brought attention to replications and Registered Reports, & spawned “repligate”
Happy to elaborate. Think of preregistration of analysis plans as hypothesizing, data analysis, and scenario planning all rolled into one and without knowing what the data are. This creates a novel decision-making situation. 1/
For example, the first time preregistering an analysis plan, many people report being shocked at how hard it is without seeing the data. It produces a recognition that our analysis decision-making (and hypothesizing) had been much more data contingent than we realized. 2/
Without the data, there is a lot of new mental work to articulate precisely what the hypothesis is and how the data could be used to evaluate that hypothesis. My odd experience was believing that I had been doing that all along, w/out realizing that I used so much discretion. 3/