I just found out a paper we first submitted ~3 years ago was accepted! We used an N > 1,000 sample, open data/code, and robust methods
I'm proud of this paper, and it also helped radicalize me against a lot of the stories we tell ourselves about peer review
A 🧵
The many reviews we received were almost uniformly hostile, confused, non-constructive, or some combination
The paper definitely got better throughout the process, and that had ~0 to do with the reviews
Real reason #1: A wonderful, ongoing collaboration with a stellar biostatistician/many other great collaborators
Real reason #2: I got better at coding/new tools became available
Shout out to @wdonald_1985 in particular for the BGGM package so I could immunize the paper against the tiresome debates over whether networks replicate or not
The reviewers never suggested any of the methods we added to the paper over time, nor did they comment on the code at any point
I know commenting on code isn't a norm in psych, but also yikes at that norm
Twitter can be a terrible place for too many academics, and yet the helpful side of academic Twitter was ~infinitely more helpful in making this paper better than any review we received
If your impulse is to tell yourself this is an isolated incident or the casualty of an academic turf war I don't blame you!
I'd also recommend Science Fictions by @StuartJRitchie where he goes into the deep-rooted, systemic problems with peer review
And if the story we need to tell ourselves is the reviewers made this paper indirectly better by rejecting it, fine I guess?
Though by that logic every paper should be rejected 4-5 times for inconsistent reasons in the hope the authors will become better scientists over time
I also have a privilege stack that goes halfway to the moon
My overall "not great" experiences with peer review have affected me materially way less than they would have if I held any minoritized identity
I think there's a disconnect for a lot of capable, well-meaning people re: peer review
We can acknowledge it's wildly inconsistent, and when we study it the results are disappointing
But if we've "survived" as academics, our personal experiences are ultimately positive enough
We ask clinicians to go beyond their personal experiences of "what works" to adopt evidence-based treatments
We ask well-published researchers who've "won" the peer review gauntlet to go beyond their personal experiences of "what works" to consider other systems of evaluation
When push comes to shove, it's extremely difficult to not prioritize what we've experienced ourselves over what is abstractly happening in general
And too easy to dismiss others' bad experiences as isolated events/something wrong with that paper/something wrong with the author
And certainly my personal experiences could be (and likely are) biasing me against peer review as practiced!
But I also have receipts
Like low interrater reliability (i.e., can reviewers even agree on whether a paper is good? Nope!) across 100,000 published papers
One place I do think peer review minimally helps is increasing the likelihood that a paper meets reporting standards, though any effects there don't seem to be very large
I also think (thanks to @NC_Jacobson) that peer review likely decreases the gap between methods and causal claims a little, though unfortunately that gap is often still huge
I don't think the trade-offs of "sure, no one can agree whether a paper is good and lots of reviews are poor quality, but at least we can minimally increase adherence to reporting guidelines/make people overclaim slightly less!" justify the system as implemented
Still, I agree with @siminevazire here. I'm asking for a rethinking of what peer review should be, not advocating a complete absence of it
This thread is already too long, and if you're understandably wondering "where are the solutions?" I only have options, no guarantees
Open peer review, where reviews are published alongside the articles (not necessarily with the reviewers' names attached) = a start
Another more radical idea is for all papers to be posted as preprints, with either no journals or minimal overlay journals, where peer review happens out in the open and people can see how papers do/don't change over time
There are other worthy ideas too, and none of them are perfect either! But if we start to accept that the status quo is unacceptable, we have to at least explore other imperfect options that might be improvements