An ironic back story (h/t @Edit0r_At_Large for evoking the memory).

This paper was itself submitted as a Registered Report in 2018 but was rejected!

The reviews were excellent. They identified some limitations we could solve and others we would have needed to debate on feasibility grounds.
@Edit0r_At_Large The journal did invite a resubmission if we wanted to try to address them. However, we ultimately decided not to resubmit because of timing; we had a grant deadline to consider.

We did incorporate the reviewer suggestions that we could into the final design and proceeded.
We eventually had the full report, and it was peer reviewed through the normal process.

We published the paper in Nature Human Behaviour.

The RR was originally submitted to Nature Human Behaviour.

I think the RR submission did meaningfully improve our design & odds of success.
I was asked in another thread about the basis of rejection.
The feasibility concerns about the stimulus set resolved well when we implemented the plan. We barely needed the back-up plans we had prepared in case the main strategy didn't work.

On reviewer sampling, we had a good strategy for broad outreach and expertise matching. The sample ended up quite positive about RRs.
In other survey data, we have some evidence of very widespread positivity about the format in these fields, so the result is not necessarily a sampling bias based on the population.

We did more work on the power analysis based on the reviews.
And we retained the mostly question-based, exploratory approach, leaving much of the theoretical positioning for the various outcome criteria to the supplemental material.

Incidentally, I think the RR format is for planned research, inclusive of confirmatory and exploratory methods.
Idea based on this experience: A tough test for RRs would be to compare standard articles that began as rejected RRs with other standard articles. Both are published in the conventional format, but the rejected RRs might have been improved by the review process, as ours was IMO.
