However, they are not equally likely to produce replicable or useful scientific inference.
This is the strategy nearly exclusively favored by epidemiology in observational data.
This is the strategy nearly exclusively favored in econometrics.
While both strategies can certainly be made useful, they aren't equally likely to be.
These are only the biggest, most existential study design threats. If even ONE of these fails, the whole model fails.
If you miss even one, your model is likely to be severely biased. If you miss more than one, your model is likely to be severely biased in unknown directions (but never randomly).
Missing just one of these is likely to result in severe bias.
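To make the point concrete, here's a minimal simulation (all variable names and coefficients are hypothetical, chosen only for illustration) of omitted-variable bias: the true effect of the exposure is zero, but leaving a single unmeasured confounder out of an otherwise fine regression produces a large, spurious estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: u confounds both x and y.
u = rng.normal(size=n)                        # unmeasured confounder
x = 0.8 * u + rng.normal(size=n)              # exposure driven partly by u
y = 1.5 * u + rng.normal(size=n)              # TRUE effect of x on y is 0

def ols_slope(X, y):
    """Least-squares coefficient on the first column of X (intercept added)."""
    X = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

adjusted = ols_slope(np.column_stack([x, u]), y)  # confounder included
naive = ols_slope(x.reshape(-1, 1), y)            # confounder omitted

print(f"adjusted estimate: {adjusted:.3f}")  # near 0, the true effect
print(f"naive estimate:    {naive:.3f}")     # severely biased away from 0
```

The direction and size of the naive bias depend entirely on how the omitted confounder relates to exposure and outcome, which is exactly why missing one is so damaging: you don't just get noise, you get a confident wrong answer.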
There are extremely limited circumstances in which this is reasonable and would produce anything other than noise. In human subjects, those circumstances are virtually non-existent.
Which is also really, really ridiculously difficult, and has a lot of the same problems as above, with two extremely important differences:
Bias avoidance is a strategy to reduce the problem to something manageable.
Often, people believe that the problem is important enough to try anyway.
That's the rough equivalent of Leeroy Jenkinsing into a problem where actual people's decisions are impacted.
It's often better to do nothing at all, and embrace that some questions can't be answered w/ stats.
That's true even among many in the "causal revolution" epi crowd.
And I agree with all of the above, but with a notable caveat: often the best we can do is either nothing, or something VASTLY more expensive.
Inevitably, that will lead us mostly toward questions answerable by bias avoidance.
It took three truly brilliant superstars a few decades to get the field of economics to recognize its past failures, and pave a new way forward.
But as far as I can tell, none are publicly questioning the fundamental beliefs and methods institutionalized in the field.
I'm an outsider so I don't really count. But maybe you do?
I did NOT mean to imply that there are no good epi studies (there are lots), nor that controlling for stuff never works.
It can (and does) work great as a secondary strategy, i.e. in a scenario where the bias has already been reduced to manageable levels.