There has never been a time in science when most published findings were true. And yet many disciplines managed to have a profound impact nonetheless. Why?
My take (which is probably wrong, but I want to know why) is that most of the retrospectively big successes in science were coupled to big successes in engineering.
In order to find out "what's true," you have to find out "what works."
Maybe a dark example here: which is better evidence of the theory of relativity? The Michelson–Morley experiment or the nuclear bomb?
There was a thread on here last week where two Turing award winners were jovially promoting the idea that mathematical statistics is better at determining causation than common sense. 1/10
We have known for at least 50 years that this is wrong, and yet academics continue to push this illusion.
Here are some of my favorite critiques… I'll give 6 because I don't believe in 93-part threads. 2/10
Meehl, 1978. "Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology." psycnet.apa.org/record/1979-25… 3/10
I have a recommended reading list for Artificial Intelligence, and it hasn't changed since 2019. I give this list to my grad students, but all of the articles are broadly accessible if you're interested. Very short 🧵.
1) Ted Chiang's critique of the threat of superintelligence.
The IBM 704 was the most amazing general-purpose AI computer ever made.
Released in 1954, the 704 could compute 12,000 floating-point operations per second. And it ran on vacuum tubes.
The 704 was absurdly ahead of Moore’s Law scaling. In the imaginary world where Moore’s Law applied to tubes, we’d today have 80 petaflop workstations... though they’d probably need to be cooled with liquid helium.
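The "80 petaflop" figure falls out of a simple compound-doubling extrapolation. A minimal sketch of that arithmetic, assuming (my assumptions, not stated in the thread) the classic 18-month doubling cadence and an extrapolation window from 1954 to roughly 2018:

```python
# Back-of-the-envelope Moore's-law extrapolation from the IBM 704.
# Assumptions (mine): 18-month doubling, window ending around 2018.
base_flops = 12_000          # IBM 704 throughput, 1954 (~12 kflops)
doubling_months = 18         # assumed Moore's-law cadence
years = 2018 - 1954          # assumed extrapolation window

doublings = years * 12 / doubling_months
projected = base_flops * 2 ** doublings
print(f"projected: {projected / 1e15:.0f} petaflops")  # on the order of 80 petaflops
```

With these inputs the projection lands in the tens of petaflops, consistent with the thread's figure; a slower cadence (say, doubling every two years) would land orders of magnitude lower, which is why the chosen cadence matters.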
By now on this fine Friday you have all seen this terrible MMWR study. It has many flaws, but I am particularly incensed that it insults my favorite observational design, the test-negative control design.
Test-negative control is used to evaluate vaccine effectiveness. It attempts to avoid confounding by restricting its subjects to those seeking medical attention.
People who seek treatment for a respiratory infection get tested for the disease. Those who test positive are the cases, those who test negative are the controls, and vaccination status is the exposure being compared.
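The setup above leads to the standard test-negative estimator: vaccine effectiveness is one minus the odds ratio of vaccination among cases versus controls. A minimal sketch with made-up counts (the estimator form is standard; the numbers are purely illustrative):

```python
# Test-negative design estimator: VE = 1 - odds ratio of vaccination
# in test-positives (cases) vs test-negatives (controls).
# All counts below are invented for illustration.
vax_pos, unvax_pos = 40, 160    # test-positive (cases)
vax_neg, unvax_neg = 300, 300   # test-negative (controls)

odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
ve = 1 - odds_ratio
print(f"estimated VE = {ve:.0%}")  # prints "estimated VE = 75%"
```

Restricting both groups to care-seekers who got tested is what (partially) controls for health-seeking-behavior confounding, since cases and controls share that behavior by construction.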
The intro will be in plain English and will usually claim some "counterintuitive surprise." But all of the main math results will be summarized in table 1 or table 2. 2/9
Each number in these tables represents an estimated quantity from some completely implausible statistical model the authors cooked up to "control for confounding effects." Usually the first entry is the only one you should care about. 3/9
What are examples of observational data analysis leading to widespread, faulty scientific consensus?
Context: I ask because I just re-read a lovely essay "On Types of Scientific Inquiry: The Role of Qualitative Reasoning" by David Freedman. 2/10
Freedman goes through several impressive case studies in which scientists made revolutionary discoveries on the basis of observational epidemiological data alone. 3/10