How bad are Richard Lynn's 2002 national IQ estimates?
They correlate at r = 0.93 with our current best estimates.
It turns out that they're really not bad, and they don't provide evidence of systematic bias on his part🧵
In this data, Lynn overestimated national IQs relative to the current best estimates by an average of 0.97 points.
The biggest overestimation took place in Latin America, where IQs were overestimated by an average of 4.2 points; Sub-Saharan Africa was underestimated by an average of 1.89 points.
Bias?
If you look at the plot again, you'll see that I used Lynn's infamously geographically imputed estimates.
That's true! I wanted completeness. What do the non-imputed estimates look like? Similar, but Africa does worse. Lynn's imputation helped Sub-Saharan Africa!
If Lynn was biased, his bias had minimal effect, and his much-disdained imputation actually nudged underperforming Sub-Saharan Africa's estimates upward a bit. Asia also got a boost from imputation.
The evidence that Lynn was systematically biased in favor of Europeans? Not here.
Fast forward to 2012 and Lynn had new estimates that are vastly more consistent with modern ones. In fact, they correlate at 0.96 with 2024's best estimates.
With geographic imputation, the 2012 data minimally underestimates Sub-Saharan Africa, and once again whatever bias there is falls more heavily on Latin America, which is overestimated.
But across all regions, there's just very little average misestimation.
Undo the imputation and, once again... we see that Lynn's preferred methods improved the standing of Sub-Saharan Africans.
There's really just nothing here. Aggregately, Lynn overestimated national IQs by 0.41 points without imputation and 0.51 with. Not much to worry about.
The plain fact is that whatever bias Lynn might have had didn't impact his results much. Rank orders and exact estimates are highly stable across sources and time.
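The stability claim has two parts that call for two different statistics: Pearson's r for agreement of the exact estimates, and Spearman's rho for agreement of the rank order. A minimal sketch of that check, using made-up toy numbers for six hypothetical countries (not Lynn's actual figures):

```python
import numpy as np

# Toy "estimates" for six countries from an older and a newer source;
# these values are illustrative only, not real national IQ data.
old = np.array([98.0, 105.0, 71.0, 84.0, 90.0, 102.0])
new = np.array([99.0, 106.0, 69.0, 87.0, 91.0, 101.0])

def pearson(x, y):
    # Agreement of the exact estimates.
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks,
    # i.e. agreement of the rank order alone.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

print(round(pearson(old, new), 3))
print(round(spearman(old, new), 3))   # ranks identical here, so rho = 1.0
```

When the two sources order countries identically, rho is exactly 1 even while r stays slightly below 1, which is why "rank orders stable" and "exact estimates stable" are separate claims.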
It should also be noted that these numbers can change over time, even if they tend not to. So this tentative evidence of meager bias in Lynn's sample selection (and of bias in the opposite direction in his methods) could instead reflect changes over time in population IQs or in data quality.
That might be worth looking into, but the possible bias is so meager and limited either way that the effort couldn't reveal much regardless of the direction of any bias found. (To be clear, bias in estimates wouldn't imply that personal biases were responsible.)
Some people messaged me to say they had issues with interpreting the charts because of problems distinguishing shaded-over colors.
If that sounds like you, don't worry, because here are versions with different layering:
There's a common type of misunderstanding that sounds like this:
"If taller people tend to be more educated, and women tend to be shorter than men, how do you explain women tending to be more educated?"
The issue has to do with intercepts. Consider this plot:
You can see that, among Whites, women tend to be shorter than men, and they tend to have lower earnings.
But at the same time, to similar degrees in both sexes, taller people tend to have higher earnings.
Perplexed? You shouldn't be.
The fact is that more than height differentiates men and women, so the intercept for women is shifted down, even though the slopes of the height-earnings relationship are fairly comparable across the sexes.
Debate about the value of essays in college admissions missed a key point:
Essays are biased, so they should not be used.
Here's an example: High-income people know 'what to write' to look good to raters, so they outperform on essays relative to their other qualifications.
This shows up by race, too, and that's why admissions departments use essays to infer race for the express purpose of discriminating.
Write that you're Black; that you grew up as a poor immigrant; that you're gay or a cripple.
The reason essays have no role to play in the admissions process is that they're biased. It's plain, it's simple, it doesn't need to be discussed any further.
And here's some good policy: Use tools that are not biased or lose public funding.
Happy Autism Awareness Day! I think too many people are 'aware' of autism.
Have you ever met someone who claims to be autistic, but they've never been diagnosed?
Self-reported autism spectrum disorder (ASD) is practically uncorrelated with real, clinician-diagnosed autism🧵
Sort self-reporters into those with high and low ASD scores and you get the bars on the left: the "high-trait" self-reporters look like people with diagnosed autism (ASD column), but they're more socially anxious (middle) and more avoidant (right).
So far, the means of distinguishing diagnosed from self-reported autistics have been crude.
To get a more nuanced understanding of their differences, we have to look at behavior.
For that, we'll start with the social control task.
Ending credentialism means affirmative action will become less harmful, and you can be more confident that your doctor is qualified rather than someone who replaced a qualified person in the pipeline.
Without credentialism, women's ability to select on educational credentials will be impaired, and they'll have to make better judgments.
Men aiming to leverage credentialism in the dating market in their favor will lose that edge, too. But that's good, because it's a lousy edge.