There are people who desperately want this to be untrue🧵
One example of this came up earlier this year, when a "Professor of Public Policy and Governance" accused other people of being ignorant about SAT scores because, he alleged, high schools predict college grades better than SAT scores do.
The thread in question was, ironically, full of irrelevant points that seemed intended to mislead, accompanied by very obvious statistical errors.
For example, one post in it received a Community Note for conditioning on a collider.
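For readers unfamiliar with the term: conditioning on a collider (a variable caused by two others) can manufacture a correlation that doesn't exist in the population. Here's a minimal simulated sketch with hypothetical admissions data (the variable names are illustrative, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# SAT and some other trait are independent in the full population
n = 100_000
sat = rng.normal(0, 1, n)
other = rng.normal(0, 1, n)

# Admission depends on both, so it is a collider; conditioning on it
# (looking only at admitted students) induces a spurious negative link
admitted = sat + other > 1.0

corr_all = np.corrcoef(sat, other)[0, 1]
corr_admitted = np.corrcoef(sat[admitted], other[admitted])[0, 1]
print(f"correlation, everyone: {corr_all:+.2f}")
print(f"correlation, admitted: {corr_admitted:+.2f}")
```

Among admitted students, two traits that are independent by construction look negatively correlated, which is exactly the kind of artifact a Community Note would flag.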
But let's ignore the obvious errors. I want to focus on this one: the idea that high schools explain more of student achievement than SATs do.
The evidence for this? The increase in R^2 when going from a model without high school fixed effects to a model with them.
This interpretation is bad.
The R^2 of the overall model did not increase because high schools are more important determinants of student achievement. This result cannot be interpreted to mean that your zip code is more important than your gumption and effort in school.
If we open the report, we see this:
Students from elite high schools and from disadvantaged ones receive similar results when it comes to SATs predicting achievement. If high schools really explained a lot, this wouldn't be the case.
What we're seeing is a case where R^2 was misinterpreted.
The model's R^2 blew up because there's a fixed effect for every high school in this national-level dataset.
That means every little difference between high schools is controlled for (a lot of variation!), so the model is overfit. The thousands of extra parameters, not the importance of schools, explain the high R^2.
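To see how mechanical this is, here's a minimal simulation (entirely hypothetical numbers, not the report's data) in which high schools contribute nothing to the outcome, yet adding one dummy per school still inflates R^2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 schools, 4 students each; GPA depends ONLY on SAT,
# and schools have zero true effect by construction
n_schools, per_school = 500, 4
n = n_schools * per_school
school = np.repeat(np.arange(n_schools), per_school)
sat = rng.normal(0, 1, n)
gpa = 0.5 * sat + rng.normal(0, 1, n)

def r2(X, y):
    """Plain OLS R^2 via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X_base = np.column_stack([np.ones(n), sat])

# One dummy column per school: hundreds of extra parameters that soak up noise
dummies = np.zeros((n, n_schools))
dummies[np.arange(n), school] = 1.0
X_fe = np.column_stack([sat, dummies])

r2_base, r2_fe = r2(X_base, gpa), r2(X_fe, gpa)
print(f"R^2, SAT only:             {r2_base:.2f}")
print(f"R^2, SAT + school dummies: {r2_fe:.2f}")
```

The jump in R^2 here is pure overfitting: the dummies absorb each tiny school's noise even though schools explain nothing about the outcome by design.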
This professor should've known better for many reasons.
For example, we know there's more variation between classrooms than between school districts when it comes to student achievement.
Compared to men who, on paper, committed similar crimes, women tend to receive shorter criminal sentences.
"On paper" does a lot of work here.
We know, for example, that sentencing gaps by race among males largely dissipate when accounting for severity and better measures of criminal priors, so the same may well be true for women.
But a pro-female bias seems likely too.
A revealing fact, though: there are still substantial male-female sentencing gaps today.
Race-related sentencing gaps, by contrast, have largely disappeared thanks to mandatory minimums, sentencing guidelines, and the like.
The President just released a new policy that does some big things:
- It makes it easier for friendly nations to invest in the U.S.
- It makes it harder for hostile nations to invest in the U.S.
- It makes it harder for hostile nations to steal American technology
And more🧵
To understand why this Order is so big, you'll need a little bit of background.
First, you'll need to understand what CFIUS, the Committee on Foreign Investment in the United States, does: it reviews foreign investments that might raise national security concerns.
Second, people have been worried for a while about China buying up U.S. farmland and land near U.S. military bases.
Whether this is a real issue or not, it has prompted policy and endless articles.
Gould famously claimed that Samuel Morton lied about the sizes of various skulls in his extensive collection, the "American Golgotha," in a way that was biased in favor of Whites.
So researchers remeasured the skulls in 2011, and they found Gould was wrong: Morton's measurements were accurate.
The degree to which Morton's measurements held up is so extreme that there is just no room for him to have been a biased measurer.
And this is true for all of the ancestry groups he classified the skulls into, indicating that Gould's criticism was totally off-base.
Gould also claimed Morton equated intelligence and cranial capacity and that his calculations were biased by failing to account for sex and stature.
There's no evidence for the former, and the latter was impossible for him due to how his collection was gathered.
This study is under investigation because it includes results from Stephen Breuning, a convicted fraud.
Even with his huge, fake estimates removed, the meta-analysis is riddled with publication bias. Correcting for it makes the meta-analytic estimate practically and statistically nonsignificant.
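Here's a sketch of one common small-study correction, PET (the precision-effect test): regress effect sizes on their standard errors and read off the intercept, the predicted effect of an infinitely precise study. This uses simulated data, not the meta-analysis in question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a literature whose TRUE effect is zero, but where only
# "significant" positive results get published (publication bias)
k = 4000
se = rng.uniform(0.05, 0.5, k)             # studies vary in precision
d = rng.normal(0.0, se)                    # estimates around a true effect of 0
published = d / se > 1.64                  # crude one-sided significance filter
d_pub, se_pub = d[published], se[published]

# Naive inverse-variance pooled estimate: biased upward by the filter
naive = np.average(d_pub, weights=1 / se_pub**2)

# PET: weighted regression of effect on standard error;
# the intercept estimates the effect at SE = 0
X = np.column_stack([np.ones_like(se_pub), se_pub])
W = 1 / se_pub**2
beta = np.linalg.solve((X * W[:, None]).T @ X, (X * W[:, None]).T @ d_pub)
pet = beta[0]

print(f"naive pooled estimate: {naive:.2f}")
print(f"PET intercept:         {pet:.2f}")
```

With a true effect of zero, the naive pooled estimate comes out clearly positive while the PET intercept sits near zero: the same qualitative pattern as a bias-corrected estimate turning nonsignificant.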
It is also just unserious to think that a meta-analysis including obvious rubbish should overturn much better established facts.
For example, one of the cited studies claimed to show IQ scores improving by 3.64 g, about 55 IQ points, when kids (n = 10) were offered a $5 cash prize.
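To see how absurd that is, the conversion is just the effect in standard-deviation (g) units times the IQ scale's conventional standard deviation of 15:

```python
effect_g = 3.64           # reported gain in standard-deviation (g) units
iq_sd = 15                # IQ points per standard deviation, by convention
gain_iq = effect_g * iq_sd
print(round(gain_iq, 1))  # 54.6, i.e. roughly 55 IQ points
```

A $5 prize allegedly moving ten kids by more than three and a half standard deviations is the kind of result that should fail any sanity check.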
You reveal a lot about yourself if you take nonsensical and unreplicable results seriously.
This meta-analysis never should have been published: it includes fraudulent work, it includes garbage work, and it fails to consider that psychometric bias might explain its results.
People across the political aisle engage in conspiracy theorizing at markedly similar rates, just about different things.
Q: Does each side do this to the same extent?
A: Probably not! In the case above, to get the appearance of total symmetry, you have to include a lot of different conspiracies that are very Trump-related.
Q: What about general conspiracist intent and ideation?
A: That's plausibly higher on the right in the U.S., even after accounting for measurement non-invariance. It's not globally higher, but few correlates of politics are globally consistent. More on this soon.