The speed and volume of COVID papers were, and continue to be, a shameful disaster.
But rather than opportunism, the bigger problem is a system that steers well-meaning people into "helping" in ways that are, by and large, worse than doing nothing at all.
Ain't just researchers. Back at the beginning of the pandemic, I was very into a maker/hackerspace.
Huge outpouring of well-meaning "helping" by doing things like 3D printing ventilators and whatnot.
But very little of the hard homework of checking whether they were actually useful.
It isn't enough to rip a print off thingiverse. You have to have expert involvement to make sure they are well designed for the purpose, fit into existing systems, and actually work.
If you don't, you flood the system with worse than useless "help" that is actually a burden.
Said makerspace eventually shaped up, thanks to a few people stepping up to take charge, coordinate with bigger efforts, and keep things from getting worse (and I think it did some good).
Wish we could say the same for academic research.
Nothing new here either; this is an old problem. COVID just made these existing, existential problems more intense and more consequential.
Unless our leaders in the scientific community make major changes, we'll be doomed to the same thing every time.
• • •
Periodic reminder that our current peer review *institutions* != the whole concept of peer review.
Yes, our *current peer review institutions* are an utter disaster, but that implies exceedingly little about other implementations of peer review.
If I had to guess, the vast majority of claims that "peer review doesn't work" fail to separate implementation from concept and potential.
Any institution that is underemphasized, underfunded, hobbled, opaque, arbitrary, and easily manipulated tends to produce bad results.
Peer review can absolutely be, and often is, very effective for research improvement and curation when implemented well (particularly when based on clear guidance, embracing subjectivity, professionalized, rewarded, etc.).
That's just not the peer review we find in most journals today.
New project on causal language and claims, and I want you to see how everything goes down live, to a mind-boggling level of transparency.
That includes live public links to all the major documents as they are being written, live discussion on major decisions, etc.
HERE WE GO!
Worth noting: this is the second time I've tried this kind of public transparency; the previous paper got canned due to COVID-related things.
NEW STUDY TIME!
Here's the idea (at the moment, anyway): health research has a very complicated relationship with "causal" language.
There is a semi-ubiquitous standard that if your study isn't the right method or isn't "good enough" for causal estimation, you shouldn't use the word cause; instead, you just say things are associated/correlated/whatever, and you're good to go.
Systematic reviews and meta-analyses are like plywood. While they often have a pretty veneer, they are only as useful as the layers of materials they are made of and how it's all put together.
In this essay I will
Hm, might do this, since plywood is so, so much cooler than people give it credit for, and there's a good analogy to be made with how cross-grain layers are complementary and hold each other in check.
I have nerd-sniped myself.
Ugh, fine, I'll do it.
Systematic reviews and meta-analyses are like plywood, and plywood is crazy cool stuff that I bet you never even thought about.
FWIW, having "grown up" in econ (and now spending 90% of my time in a different field entirely), this statement strikes me as a pretty accurate description of economists as a whole, and a major source of inter-field friction.
I do think that there is something to the fungibility of a lot of econ-style frameworks and ways of approaching problems, BUT, in combination with hyperconfidence, it gets econs (including me) into trouble.
I've had to learn to unlearn a lot of that hyperconfidence.
Note: fungibility is NOT AT ALL the same thing as superiority, and I think that particular line may be where the error is.
I (clearly) think there is a HUGE amount of untapped value in bridging disciplinary gaps, as indicated by the fact that I've bet the farm on it.
Now that everyone is (justifiably) up in arms about CurateScience, may I turn your attention to SciScore™, claimed to be "the best methods review tool for scientific articles." (direct quote, plastered all over their website)