Peer review is broken: a thread
by Casey Fiesler, PhD, JD, geekD

Starting with: "Our process is not set up to combat such collusion" and then meandering to academic integrity, ethics, open science, volunteer labor, and whatever else comes up. medium.com/@tnvijayk/pote…
First, this is not the only problem, but peer review can be gamed. Regardless of whether this particular accusation is true, it's clear that this could happen. There are two things at issue here: (1) how the process is set up; and (2) academic integrity.
The process of peer review isn't set up well to combat misuse/collusion/gaming. Blinded peer review is an illusion a huge amount of the time, and conflicts of interest essentially rely on the honor system. And there aren't resources to keep a close eye on it.
Academic research hugely relies on individual integrity, in the research process, in publishing, and in peer review. This begins with the fact that, though we might not want to admit it, in many cases it wouldn't be difficult to just straight-out fake data or an entire study.
The LaCour gay marriage survey scandal is a well-known example of faking data. And a lot of people were scandalized that such a thing could possibly have happened, because PEER REVIEW! How did the reviewers not realize that everything was fake? en.wikipedia.org/wiki/When_cont…
Well, peer reviewers: When you review a paper that uses survey methods do you actually see the data? (Aside: I'll get to open science later in this thread, that's not the point here.) Even if you DID see the data, would you rerun all of the analyses & scrutinize every inch?
In fact, how much effort can you put into that review at all, given that it's essentially unpaid volunteer work? Do you, for example, read all of the papers in their related works to make sure they're citing them properly?
And how do you know that all of the quotes in a paper that used interview methods aren't just made up? Because for ethics/privacy reasons, you can't see the raw data. So it's all just a leap of faith that academics writing papers aren't terrible people.
Same goes for peer review. We assume reviewers:
- aren't colluding with each other to boost their reviews
- aren't asking authors to cite their papers solely to boost their own citations
- aren't tanking papers for self-interested reasons (e.g., not wanting to be scooped)
In sum: Science relies on individual integrity. And I really do think that the majority of the time this is sufficient. The problem is: what about when it's not? Are social norms strong enough to enforce ethical behavior in academic publishing? What kinds of procedures would help?
For the research part, movements towards open science, sharing data in paper submissions, etc., can potentially help - though even then, do we expect peer reviewers to replicate studies as part of the review process?
Unfortunately I think that a lot of the problems with academic integrity come down to the incentive structures of academia, which reward publishing as much as possible, as fast as possible, and with findings as splashy as possible. This makes shortcuts very attractive.
Consider the LaCour scandal. If he had actually DONE the survey, it's highly likely the findings would not have been as flashy. They wouldn't have made the news or been discussed on This American Life, and he might not have gotten a job offer at an Ivy League school without that attention.
And for peer review, there aren't many incentive structures to do a really good job. It's volunteer labor that doesn't get you a job/tenure, and spending too much time on it means time not spent on things that are considered more important for job performance.
I do really like that some conferences/venues (e.g., CHI and CSCW) recognize good reviewers. I would love to see this built out even more! Maybe official awards, something people put on their CV. And also more discussion about why reviewing and academic service are important.
Another issue with peer review is that it's very difficult to root out bad reviewers: not just academic misconduct, but also plain BAD reviewers, the ones who overstep, or write two-sentence reviews, or tell you that you should have done a different study, or are just plain MEAN.
I know some journals keep track of reviewer data so that AEs can track timeliness, review quality, etc., and then take that into account when searching for reviewers later. I haven't seen this kind of thing in computing conferences. Can someone be blackballed? Would we want that?
There have been experiments with, or suggestions for, making reviews public in some research communities. I've seen some backlash; e.g., a couple of years ago CSCW wanted to publish meta-reviews along with papers. There are reasonable concerns about anonymity, etc.
I've seen many other critiques of peer review generally, so rather than rehash them, here's a list from a blog post by @amyjko that also suggests a new vision for peer review that is more open and transparent. medium.com/bits-and-behav…
Coincidentally, this morning I watched @asbruckman's #ICWSM2020 keynote that, among other interesting points, compared peer review to Wikipedia editing and guess which comes out sounding like the better review system. :)
When I tell people how "peer" review works in law reviews, they look at me like I'm bananas. But seriously, small armies of law students really DO read every source cited and ensure its credibility and appropriateness.
During the last open access fiasco, I did issue a provocation about what it might look like to run journals kind of like law reviews, though it doesn't closely address the structure of reviewing itself.
I think that more open peer review systems could help with the academic misconduct issues. In some ways it might be easier to "collude" if anyone can review anything, but also easier to catch? Of course, openness also means more opportunities for bias and other problems...
To (finally) wrap up: I think there are a lot of ways that our current peer review system is breaking down, and this illustration of how it can be purposefully misused is just one more example. That said, completely overhauling it will of course bring a different set of problems.
But I think it's really important that we have these conversations and continue to surface problems, even when there aren't clear sweeping solutions, so that we can at least start making small changes and see what happens instead of relying on "we've always done it this way."
For example, I would like to think that if there were more transparency in reviews, it might be easier to spot collusion rings. There are very good reasons for anonymity in the process, but it's at least worth considering the trade-offs, or potential solutions that preserve both anonymity and transparency.
Easy wish list:
(1) incentives for good reviewing
(2) consequences for bad reviewing
(3) training/scaffolding for reviewing
Harder wish list:
(1) more open processes that don't rely on small "in groups" of reviewers
(2) more transparency for reviews
(3) everyone have integrity ❤️
per @e_mln_e's reminder, because YES: on my intermediate wish list there is one item and it is "better interfaces/software for reviewing"