FWIW, having "grown up" in econ (and now spending 90% of my time in a different field entirely), this statement strikes me as a pretty accurate description of economists as a whole, and a major source of inter-field friction.
I do think that there is something to the fungibility of a lot of econ-style frameworks and ways of approaching problems, BUT combined with hyperconfidence, that gets econs (including me) into trouble.
I've had to learn to unlearn a lot of that hyperconfidence.
Note: fungibility is NOT AT ALL the same thing as superiority, and I think that particular line may be where the error is.
I (clearly) think there is a HUGE amount of untapped value in bridging disciplinary gaps, as indicated by the fact that I've bet the farm on it.
There's also a flip side: some economists are much more prone to that kind of hyperconfident, superior lane-swerving than others, and they are VASTLY more visible than your median economist.
So many assume (wrongly, but not totally unreasonably) that every economist is like that.
So it's a real issue in both a general and localized sense, but perhaps leans a lot more local than many assume.
Honestly, the whole econ vs. epi "thing" over the past year has been really frustrating. So many missed opportunities for useful collaboration and cross-learning.
Also, please don't interpret this as being specific to Oster or her work. I realize that context matters, but this ^ ain't really about that.
Systematic reviews and meta-analyses are like plywood. While they often have a pretty veneer, they are only as useful as the layers of materials they are made of and how it's all put together.
In this essay I will
Hm, might do this, since plywood is so, so much cooler than people give it credit for, and there's some good analogy making with how cross-grain layers are complementary and hold things in check.
I have nerd sniped myself.
Ugh, fine, I'll do it.
Systematic reviews and meta-analyses are like plywood, and plywood is crazy cool stuff that I bet you never even thought about.
Now that everyone is (justifiably) up in arms about CurateScience, may I turn your attention to SciScore™, claimed to be "the best methods review tool for scientific articles." (direct quote, plastered all over their website)
At the risk of getting involved in a discussion I really don't want to be involved in:
Excepting extreme circumstances, even very effective or very damaging policies won't produce discernible "spikes" or "cliffs" in COVID-19 outcomes over time.
That includes school policies.
"There was no spike after schools opened" doesn't mean that school opening didn't cause (ultimately) large increases in COVID cases.
Similarly "There was no cliff after schools closed" doesn't really mean that the school closure didn't substantially slow spread.
That's one of the things that makes measuring this extremely tricky; the effects of school policies would be expected to appear slowly over time and to interact with local conditions as they unfold.
Full disclosure: I contribute every so often to the NCRC team under the fantastic leadership of @KateGrabowski and many others, and have been a fan of both NCRC and eLife since they started (well before I started helping).
At some point I'll do a long thread about why this small thing is a WAY bigger deal than it sounds, but to tease: this heralds active exploration of a fundamental and long overdue rethinking and reorganizing of how science is assessed and distributed.