Connor Rosen
Biotech scientist. Former grad student @YaleIBIO.

Jan 8, 2019, 28 tweets

@eLife piloted an exciting author-centric approach, and just reported first results! elifesciences.org/inside-elife/2…. Time to ask the big questions… What happened? Did it “work”? Is the author-centric model “better”? Who does it “help”? I’ll dive in (long tweetstorm ahead…)

What happened? @eLife trialed a model of author-centric publishing (elifesciences.org/inside-elife/2…) - under this model, authors choose *whether* and how to respond to peer reviews, and whether to publish after seeing those reviews, independently of the editors or reviewers.

This means, as eLife stated at the time, “the decision to send a manuscript to external referees for peer review will be tantamount to accepting it for publication” (assuming all authors opt to publish after receiving their reviews).

A lot of the dialogue back then centered on people’s opinions about the relative gatekeeper roles of the journal, editors, and reviewers, since this model places more power in the editors’ hands. At the time, I wondered what would constitute “success” for the trial ().

In the absence of agreed-upon metrics for what the *ideal* situation is, there’s at least very thorough data reported by @eLife. Throughout the tweetstorm I’ll chip in with my *opinions* on whether this improved the publishing situation, from a third-party perspective.

I’m also excited to see the qualitative feedback and surveys from the authors, editors, and reviewers. Very glad @eLife seems dedicated to transparency in this process. One anecdote from @jbkinney as an author in this trial (). But now, on to numbers…

*MAJOR CAVEAT*: this was not a randomized trial! There are many ways to imagine selection bias in the trial group. Did more confident authors opt into the trial? Is the trial group populated by researchers with “unusual” opinions about publishing? Etc… With that out of the way…

Bottom line results: over 43 days, ~32% of submissions opted into the trial (313 vs 665 regular). 22% of submissions in the trial went out for review, compared to ~30% for regular papers. Note the historical acceptance rate for eLife is around 15%.
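
For the arithmetic-inclined, a quick sanity check on the headline opt-in number (a minimal Python sketch; the 313/665 split is from the eLife post):

```python
# Sanity check on the headline numbers from the eLife post.
trial_subs, regular_subs = 313, 665

opt_in_rate = trial_subs / (trial_subs + regular_subs)
print(f"Opt-in rate: {opt_in_rate:.0%}")  # ~32%
```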

So, here’s our first chance to assess - did this make things better? Certainly editors are acting more as “gatekeepers” by rejecting more papers outright, as was expected. But if the trial authors all opt to publish, more papers will be published in eLife (22% of trial submissions, versus the ~15% historical acceptance rate).

At this time, 19 papers from the trial have been published (elifesciences.org/articles/resea…). 18/19 addressed all of the reviewers’ concerns, and one had a note that “minor issues remain unresolved”. That’s a far cry from 50% being rejected in peer review.
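
Where does that ~50% figure come from? It falls out of the historical numbers above - roughly 30% of submissions sent to review, roughly 15% accepted overall. A quick sketch (inputs are the approximate rates already quoted; the rest is arithmetic):

```python
# Historical baseline: ~30% of submissions reviewed, ~15% accepted
# overall, implying about half of reviewed papers were rejected.
reviewed, accepted_overall = 0.30, 0.15
rejected_in_review = 1 - accepted_overall / reviewed
print(f"Implied in-review rejection rate: {rejected_in_review:.0%}")  # 50%

# Trial so far: 18 of 19 published papers resolved all reviewer concerns.
print(f"Trial papers resolving all concerns: {18 / 19:.0%}")  # ~95%
```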

This is key. Does the trial select for “nicer” reviewers among those who opt in? Or are reviewers, knowing that they can’t actually reject a paper, more constructive (or at the very least, less harsh)?

As long as reviewers agreed to the trial conditions before seeing the actual manuscript, it can’t be that reviewers who wanted to reject the paper opted out. That suggests that, given no option to reject, reviewers will find constructive ways to improve a paper instead.

If this holds up across the full set of trial papers, it would suggest that peer review rejects a substantial fraction of papers “unnecessarily” - that is, almost all papers sent out for review can meet standards that reviewers are willing to sign off on as “acceptable”.

So, *my opinion* is that this indicates a rousing success in improving the rate at which good science enters the literature. By removing reviewers’ ability to reject papers, we see that authors still put in the effort to produce a good paper, without having to resubmit and delay.

I think getting more “good” science into the literature more quickly (and it’s very easy to see whether it was reviewed as “good” - the editor summary is right next to the abstract!) is admirable, so I’d say this suggests things are improving.

I won’t go into the country / subject breakdowns, or the appeals rate. Appeals are too few to draw any real conclusions, the subject breakdowns aren’t of major interest to me, and I can’t really speculate about what drives the country differences.

The meaty demographics come next - gender and career stage. A lot of the discussion of alternatives to standard peer review focuses on whether biases of various kinds are mitigated by different review strategies (double-blind, etc.). So these numbers are important.

Gender: women opted into the trial at a slightly lower rate, but their encouragement rate (% sent out for review) in the trial was essentially identical to men’s - a slight improvement over regular submissions, where men’s encouragement rate was a couple of percentage points higher.

With numbers this small, it’s hard to tell whether that’s a real improvement or noise - but if it’s real, any amount of improvement is worth it!

Now for career stage… Which has provoked plenty of reactions! @jschoggins (), @GaetanBurgio (), @MHend1cks (, with a poll!), @wormsense () - among many others.

The one-line summary is that the process seems to favor senior researchers (with the caveat that a big chunk of authors couldn’t be assigned a career stage). Late-career researchers opt into the trial at a lower rate than early- or mid-career researchers, but have the highest encouragement rate.

For late-career researchers, the encouragement rate was very similar between regular and trial submissions, while early- and mid-career researchers saw substantial drops if they opted into the trial. So at first glance, this initiative seems to hurt early/mid-career researchers.

However, it’s not a zero-sum game: more papers come out under this method. Does that mean early/mid-career researchers (EMCRs) might actually be better off anyway? Does making the pie bigger help everyone? (I’m asking genuinely - I’m only a grad student, I don’t know!)

Without knowing the acceptance rates for regular submissions by career stage (@eLife, will this be part of the follow-up?), it’s hard to be sure. If in-review acceptance is a uniform 50% at all career stages, a higher percentage of EMCR submissions still gets published under this scheme.
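
To make the bigger-pie intuition concrete, here’s a toy calculation. The per-stage rates below are illustrative assumptions, not eLife’s reported numbers, and the ~95% publish rate is extrapolated from the 18/19 papers so far:

```python
# Toy model of the "bigger pie": what fraction of an EMCR's
# submissions end up published? All rates here are illustrative
# assumptions, NOT eLife's reported per-stage numbers.
regular_encouraged = 0.30   # sent out for review, regular process
regular_pass_review = 0.50  # assumed uniform in-review acceptance
trial_encouraged = 0.22     # assumed (lower) gate for trial EMCRs
trial_publish = 0.95        # nearly all trial papers publish (18/19 so far)

regular_published = regular_encouraged * regular_pass_review
trial_published = trial_encouraged * trial_publish

print(f"Regular route: {regular_published:.0%} published")  # 15%
print(f"Trial route:   {trial_published:.0%} published")    # ~21%
```

Under these (assumed) numbers, the lower encouragement rate is more than offset by near-certain publication after review.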

I may not agree with @GaetanBurgio that the takeaway is “Not looking good for #EMCRs”, but that may be my naiveté about pie sizes and the zero-sum nature of the game. Regardless, it doesn’t fix the imbalance between late- and early/mid-career researchers. Still room to improve.

Next: time and work. Editors spent more time on trial papers (a few days more, on average), which makes sense given the higher stakes. More reviewers were involved per trial paper (about twice as many papers had two extra reviewers commenting), again unsurprising.

That’s it! Obviously there’s room to improve on the late- vs early/mid-career gap, although it may not be total doom-and-gloom. Gender bias is (maybe) helped slightly. Country demographics bear further investigation, as do the long-term follow-up and qualitative feedback.

Thanks for the effort and transparency, @eLife! I look forward to more data, more blog posts, and more discussion! So far, I think there’s really awesome potential here, and it may be improving things and pushing science forward meaningfully and quickly!
