It has become clear to me since we published this paper a month ago that a fair number of people are under a misconception about the way(s) in which bias can occur in peer review.
tl;dr Some people seem to mistakenly think that bias is something that happens exclusively in the heads of reviewers, at the time of review. This is naive and potentially harmful.

Longer version:
Many review processes in academia explicitly or implicitly evaluate a period of time during which advantage or disadvantage may accumulate.

Explicit example: career awards explicitly evaluate track records.
Implicit example: when you submit a grant or fellowship application, it reflects not only your ability to come up with interesting research questions and propose methodologically sound ways to answer them, but also:
(1) your access to resources to generate preliminary data

(2) your previous ability to secure resources (past grant records, startup funds, protected time for research at your institution, equipment time, matching funds when those are required, etc.)
(3) your network, built through formal and informal connections, along with the invitations & opportunities that come from that network
(4) if the grant language is not your first language, your access to language-editing help

(5) if you are junior, your training environment and the opportunities you have or had there, and possibly also your mentor’s network
(6) your access to library resources (especially relevant for researchers studying/working in countries that have fewer resources due to colonization--thanks to @RutNdjab for noting this recently)

and many other factors.
A number of studies have experimentally evaluated the potential for bias in the scientist’s/reviewer’s head using hypothetical scenarios.
Some of them have found that such bias may occur against members of underrepresented groups. For example, see this paper by Moss-Racusin and colleagues about hiring a lab manager. pnas.org/content/109/41…
Others have found no such bias; for example, this paper by Forscher and colleagues about NIH R01 applications. nature.com/articles/s4156…
Still others have found reviewers may even favour members of underrepresented groups. For example, see this paper by Williams & Ceci about faculty candidates. pnas.org/content/112/17…
These papers each address a similar question (is there bias in the heads of the people reviewing this file?) in different contexts (hiring a lab manager, reviewing a grant, hiring a new faculty member).
All of these papers describe what are, in my view, well-designed, adequately powered experiments with good methods. I’m glad these teams have done these studies. They are important for helping to identify or rule out problems.
But (and here’s the important part) these kinds of hypothetical studies where you just substitute different names to signal different identity characteristics depend on *all else being equal*, which is rarely true in real life.
For Moss-Racusin and colleagues’ work to apply in real life, there must be completely equal chances to build CVs, and hiring & salary must be decided purely through CV review--no interview, no references, no referrals from a colleague. That is not how it works in real life.
For Forscher and colleagues’ work to apply in real life, PIs must have equivalent access to resources, networks, training environments, etc. That is not how it works in real life.
For Williams and Ceci’s work to apply in real life, faculty candidates must have equal opportunities to write papers as 1st author, equally strong letters of recommendation, be judged equivalently in their job talks, dinners, 1-on-1 interviews, etc. Not how it works in real life.
These experiments are excellent ways to test the presence or absence of very specific biases in the heads of reviewers at the time of review. That’s an important thing to test. But, unfortunately, it is not the only time at which people might experience advantage or disadvantage.
Even more unfortunately, advantage and disadvantage clearly compound over time ("the Matthew effect"), as shown in this clever analysis of people who scored just above and just below the payline for an early career award. pnas.org/content/115/19…
Side note: part of the issue shown there is that the people who were not quite funded were less likely to keep applying for things.
Of course, there are reasons outside people's control why they might stop applying, but I am reminded of the response I've received from the excellent @SDM_ULAVAL every time I've sent her a sad, "I didn't get this one" email: "On ne lâche pas!" (One doesn't give up!)
Anyway, final takeaway: People who support a level playing field in science, stop suggesting blind review as the answer to everything. It may help in some situations (e.g., papers) but may hurt in others. This goes for tech, too. See this excellent thread:
More bluntly: in cases where there's bias in the heads of reviewers, concealing applicants/authors' identities from reviewers is likely to help ensure rigorous, unbiased review. But in cases where there is cumulative advantage/disadvantage, it may not help & might even hurt.
For whatever it's worth, here are some other recommendations I made recently, and here's an interesting paper about one funder's approach to ensuring fair, rigorous peer review: thelancet.com/journals/lance…