We have a broken system in science. Currently, the main bulwark still protecting science from total collapse in the US is the NIH scientific review system.
I know this may sound extreme. But hear me out. 1/N
For-profit “open-science” publishers make exponentially more money each year as they exploit the incentives of our system, in which authors are willing to pay to publish their papers and need to publish as many as possible to boost their own research metrics. 2/
“The literature environment published in Chinese is already ruined, since hardly anyone believes them or references studies from them… Now this plague has eroded into the international medical journals.” nature.com/articles/d4158… 3/
Peer review is now a sham at many journals, and not just at journals on the “predatory journal” blacklist. Years ago, I served as a special-issue editor at Frontiers and discovered there was no way in their system to reject a paper! 4/
Perhaps it was possible by asking for an ‘exception’, but it was not what editors were supposed to do. I realized that this was a for-profit company, and of course they want as many authors as possible to pay them to publish as many papers as possible. 5/
Other for-profit publishers are quickly discovering the cash cow that is academic “open science” publishing - for instance, see MDPI’s crazy fast growth: paolocrosetto.wordpress.com/2021/04/12/is-… 6/
When I think about it, this is the natural outcome of a “publish-or-perish” academic system. For-profit publishers were already making staggering profits before the ‘open science’ movement. 7/
For instance, “The academic publishing industry has a large financial turnover. Its worldwide sales amount to more than USD 19 billion, which positions it between the music industry and the film industry” tidsskriftet.no/en/2020/08/kro… 8/
But it used to be that university libraries and individual subscriptions funded most of these billions, and publishers had a strong incentive to publish quality work, since their impact factor and reputation influenced their subscription numbers. 9/
Now researchers can pay directly to have their work published. And this means that the quality of the research matters a lot less to publishers than it did before. Quantity is the new name of the game. 10/
The NIH is the only major system left where the financial incentives still favor quality work. Grant reviewers have to be prepared to answer questions from other reviewers about the proposals they reviewed, and so they read them carefully. 11/
Reviewers have no financial incentive to favor some grants over others, and at least at the NIA, the institute I’m familiar with, reviewer feedback and scores are the most important determinant of whether a grant is funded. 12/
While fraudulent and lousy research can make it through the NIH gauntlet of scrutiny, doing so is not easy, in contrast to how easily paper mills have infiltrated our current publishing system: 13/
“To date, it is likely that misconduct, in all its forms, could be one of the biggest threats to academia's integrity, and this directly impacts the public, especially if such research is medical, affecting the lives and well-being of society.” ncbi.nlm.nih.gov/pmc/articles/P… 14/
University promotion systems are supposed to reward quality, but by focusing on quantitative metrics such as number of papers published and impact factors, they ignore fundamental questions that matter for the field: Is the research trustworthy? Can it be replicated? 15/
I’m sorry to report that in the tenure committees I’ve served on, we focused more on the easily quantifiable metrics (impact factors, # of publications, grants) than on the research quality itself, which is harder to evaluate because it requires judging work in fields where we are not experts. 16/
What can we do? I believe we need to change the university promotion system incentives. Apparently this is starting: nature.com/articles/d4158… 17/
What should we prioritize instead of (or in addition to) a researcher’s number of publications or impact factor? What about the number of times their work has been replicated? With some credit also given to those who replicate work… 18/
Or the number of times the data set they produced has been used in publications by other researchers? The objective would be to make the incentives for tenure and promotion line up with what benefits science and society rather than the fame of the individual researcher. 19/
In summary, I believe we desperately need fundamental reform to save science from the for-profit companies whose financial incentives run counter to the goal of producing high quality, replicable science. What is at stake is the integrity of our published literature. 20/20
• • •
This paper troubled me when it came out because it did not line up with my experience on grant review panels, where most reviewers (entering scores before seeing others’ reviews) tend to give similar scores most, but not all, of the time. pnas.org/content/115/12… 1/
Rereading it, I noticed that the sample of 25 grants they had reviewers evaluate had all been funded by the NIH. That means those grants were in the top 20% or so of scored grants. There were no previously poorly evaluated grants in the pool. 2/
So, yeah, duh, no surprise that there was not much consensus among reviewers of these top-rated grants as to which were the very best ones. They were missing the other 100 or so grants they would need to represent the full range of the scoring scale. 3/
This paper is misleading because they solicited grant applications from investigators and their entire set consisted of funded grants. So they did not have any poor-quality grants (or even any that did not get funded) in their pool.
While the NIH process is not perfect, I am impressed by how carefully reviewers read proposals, and I know that my colleagues put a huge amount of effort into grant reviews.
Usually when I'm on a panel, I find my scores are not too different from the other reviewers', but when we disagree, we get the chance to air our disagreements in front of the rest of the room, and everyone else can decide for themselves what they think.
I co-reviewed for Cells with a lab member. I thought it was for the journal Cell until I submitted our review via the online reviewer form. I felt duped. We gave extensive feedback and the authors responded in detail. Then the editor asked us to review the revision in just 3 days?!
I explained that I was working on a grant deadline and could not do it that quickly. She was not willing to give me the extra time I needed. She told me to quickly review their response letter...
... and said, "If you don't have enough time to check the revised version, please feel free to let me know. We will ask our academic Editor to check if your concerns are all addressed." Our joint review was the only review received for this paper.