New in Nature Human Behaviour: We had 353 peer reviewers evaluate published Registered Reports versus comparison articles on 19 outcome criteria. We found that RRs were consistently rated higher on rigor and quality.
Figure shows performance of RRs versus comparison articles on the 19 criteria, with 95% credible intervals. Red criteria were evaluated before knowing the results, blue after knowing the results, and green summarize the whole paper.
Congrats to @cksoderberg Tim Errington @SchiavoneSays @julia_gb @FSingletonThorn @siminevazire and Kevin Esterling for the excellent work on this project to provide an additional evidence base for how Registered Reports can alter the credibility of published research.
The next step is to get funding for a randomized trial of Registered Reports in a naturalistic setting. Here's our proposal, partnering with many journals to insert the RR format during the revise-and-resubmit process when authors are asked to run another experiment. osf.io/preprints/meta…
@Edit0r_At_Large The journal did invite a resubmission if we wanted to try to address the reviewers' concerns. However, we ultimately decided not to resubmit because of timing. We had a grant deadline to consider.
We incorporated the reviewer suggestions that we could into the final design and proceeded.
We eventually completed the full report, and that was peer reviewed through the normal process.
We published the paper in Nature Human Behaviour.
The RR was originally submitted to Nature Human Behaviour.
I think the RR submission did meaningfully improve our design & odds of success.
10 years of replication and reform in psychology. What has been done and learned?
Our latest paper, prepared for the Annual Review, summarizes the advances in conducting and understanding replication and the reform movement that has grown around it.
We open w/ an anecdote about the 2014 special issue of Social Psychology. The event encapsulated themes that played out over the decade. The issue brought attention to replications, Registered Reports, & spawned “repligate”
Happy to elaborate. Think of preregistration of analysis plans as hypothesizing, data analysis, and scenario planning all rolled into one, without knowing what the data are. This creates a novel decision-making situation. 1/
For example, the first time preregistering an analysis plan, many people report being shocked at how hard it is without seeing the data. It produces a recognition that our analysis decision-making (and hypothesizing) had been much more data contingent than we realized. 2/
Without the data, there is a lot of new mental work to articulate precisely what the hypothesis is and how the data could be used to evaluate that hypothesis. My odd experience was believing that I had been doing that all along, w/out realizing that I used so much discretion. 3/
Some predictions about whether a researcher's ideology affects their likelihood of replicating a prior result. ht @jayvanbavel
First, I have no doubt that ideology CAN influence replicability. Classic Rosenthal work + more provides a good basis.
So, under what conditions?
1. Ideology may guide selection of studies to replicate. More likely to pursue implausible X because it disagrees with my priors; and pursue plausible Y because it agrees with my priors.
On balance, this may be a benefit of ideology, helping with self-correction and bolstering.
2. Ideology may shape design of studies. More likely to select design conditions to fail if I don't like the idea; more likely to select a design to succeed if I like the idea.
This is a problem because of the tendency to overgeneralize from limited conditions to the whole phenomenon. But,