Consider a threshold set 0.67 SDs (10 points) above the mean of the higher-performing of two equal-variance groups separated by 0.97 d.
Simulating one million persons per group, both the mean difference and the SDs shrink among those who clear the threshold: the above-threshold gap is 0.412 d.
But we know that the 0.97 d gap is an underestimate due to range restriction.
Using MBE scores, it looks like the unrestricted gap should be more like 1.22 d. That leaves us with a 0.537 d gap above the threshold.
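The 0.412 d figure can be reproduced with a short simulation, assuming two unit-variance normal groups, a cut 0.67 SDs above the higher group's mean, and a pooled-SD Cohen's d among those above the cut (the pooling choice is my assumption, not stated in the thread). Note that naively plugging in the 1.22 d unrestricted gap yields a somewhat smaller above-threshold gap than the thread's 0.537 d, so the thread's calculation presumably also adjusts the threshold or variances for range restriction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # one million simulated persons per group, as in the thread

def above_threshold_d(gap, cut=0.67):
    """Cohen's d among members of two unit-variance normal groups
    who score above a common cut (in higher-group SD units)."""
    hi = rng.normal(0.0, 1.0, n)    # higher-performing group
    lo = rng.normal(-gap, 1.0, n)   # group whose mean is `gap` SDs lower
    hi_sel, lo_sel = hi[hi > cut], lo[lo > cut]
    # Pooled SD of the two truncated (above-cut) distributions
    pooled = np.sqrt((hi_sel.var(ddof=1) + lo_sel.var(ddof=1)) / 2)
    return (hi_sel.mean() - lo_sel.mean()) / pooled

d_restricted = above_threshold_d(0.97)  # close to the thread's 0.412 d
d_corrected = above_threshold_d(1.22)   # larger, but below 0.537 d here
```

Selection above a common cut compresses both the means and the spreads, which is why the above-threshold gap is so much smaller than the full-population gap.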
Do we have subsequent performance measures?
Yes! We have three:
- Complaints made against attorneys
- Probations
- Disbarments
For men, the gaps, in order, are 0.576, 0.513, and 0.564 d. For women, the gaps are 0.576, 0.286, and 0.286 d.
Men fit expectations and women apparently needed less discipline.
These gaps probably replicate nationally.
For example, here are Texas pass rates from 2004 - a 0.961 d Black-White first-pass gap. The 2006 update to these figures raised the gap to 0.969 d.
Those figures are basically in line with LSAC's national study of Bar exam pass rates.
And those are basically in line with New York's gaps.
And this should probably be expected, since these tests largely measure the same underlying abilities.
Since all of the people included in these statistics went to ABA-accredited schools, they all had the opportunity to learn what was required to perform well on these tests.
But just like the Step examinations for medical doctors, the gaps on the tests and in real life remain.
After the Counter-Reformation began, Protestant Germany started producing more elites than Catholic Germany.
Protestant cities also attracted more of these elite individuals, but primarily to the places with the most progressive governments🧵
Q: What am I talking about?
A: Kirchenordnung, or Church Orders, otherwise known as Protestant Church Ordinances, a sort of governmental compact that started cropping up after the Reformation, in Protestant cities.
Q: Why these things?
A: Protestants wanted to establish political institutions in their domains that replaced those previously provided by the Catholics, or which otherwise departed from how things were done.
What predicts a successful educational intervention?
Unfortunately, the answer is not 'methodological propriety'; in fact, it's the opposite🧵
First up: home-made outcome measures, a lack of randomization, and a study being published rather than left unpublished all predict larger effects.
It is *far* easier to cook the books with an in-house measure, and it's far harder for other researchers to evaluate what's going on because they definitionally cannot be familiar with it.
Additionally, smaller studies tend to have larger effects—a hallmark of publication bias!
Education, like many fields, clearly has a bias towards significant results.
Notice the extreme excess of results with p-values that are 'just significant'.
Once you realize this is happening, the pattern above should make you suspicious.
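One way to see how that excess of just-significant results arises is to simulate studies where the null is true and "publish" nonsignificant results only some of the time. A minimal stdlib sketch, where the 20% file-drawer rate and study size are illustrative assumptions rather than figures from the thread; the published p-values show a sharp step at the .05 cutoff:

```python
import random
import statistics

rng = random.Random(1)
nd = statistics.NormalDist()
n = 30            # observations per simulated study
published = []

for _ in range(20_000):
    # One-sample z-test on null-true data (true mean 0, known SD 1)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    z = statistics.fmean(xs) * n ** 0.5
    p = 2 * (1 - nd.cdf(abs(z)))
    # Selective publication: significant results always appear,
    # nonsignificant results only 20% of the time (assumed rate)
    if p < 0.05 or rng.random() < 0.20:
        published.append(p)

# Count published p-values just below vs. just above the cutoff
just_sig = sum(0.04 <= p < 0.05 for p in published)
just_non = sum(0.05 <= p < 0.06 for p in published)
```

In the published record, the bin just under .05 ends up several times fuller than the bin just over it, even though nothing real was being detected.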
Across five different large samples, the same pattern emerged:
Trans people tended to have rates of autism several times higher than non-trans people.
In addition to higher autism rates, when looking at non-autistic trans versus non-trans people, the trans people were consistently shifted towards showing more autistic traits.
In two of the available datasets, the autism result replicated across other psychiatric traits.
That is, trans people were also at an elevated risk of ADHD, bipolar disorder, depression, OCD, and schizophrenia, before and after making various adjustments.
Across 68,000 meta-analyses including over 700,000 effect size estimates, correcting for publication bias tended to:
- Markedly reduce effect sizes
- Markedly reduce the probability that there is an effect at all
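The thread doesn't say which correction the meta-study applied; one standard approach is PET (the precision-effect test), which regresses effect sizes on their standard errors and takes the intercept, i.e. the predicted effect of a hypothetical zero-error study, as the bias-corrected estimate. A sketch on simulated data where the true effect is zero but only positive, significant results get published (all parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
effects, ses = [], []
while len(effects) < 500:
    n = int(rng.integers(10, 200))
    se = 1 / np.sqrt(n)        # rough SE of a standardized effect
    d = rng.normal(0.0, se)    # the true effect is zero
    if d / se > 1.96:          # only positive significant results published
        effects.append(d)
        ses.append(se)

effects, ses = np.array(effects), np.array(ses)
naive = effects.mean()  # biased upward by selective publication
# PET: weighted least squares of effect on SE (weights 1/SE^2);
# the intercept is the bias-corrected effect estimate
slope, intercept = np.polyfit(ses, effects, 1, w=1 / ses)
```

Because noisier studies need larger effects to clear significance, published effects rise with SE; extrapolating back to SE = 0 strips much of that selection artifact out, pulling the estimate from a clearly positive naive mean back toward the true zero.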
Economics hardest hit:
Even this is perhaps too generous.
Recall that correcting for publication bias often produces effects that are still larger than the effects attained in subsequent large-scale replication studies.
A great example of this comes from priming studies.
Remember money priming, where simply seeing or handling money made people more selfish and better at business?
Those studies were riddled with publication bias, and preregistered studies failed to find any effect at all.
It argues that one of the reasons there was an East Asian growth miracle but not a South Asian one is human capital.
For centuries, South Asia has lagged on average human capital, whereas East Asia has done very well in all our records.
It's unsurprising, then, that these gaps continue today.
We already know based on three separate instrumental variables strategies using quite old datapoints that human capital is causal for growth. That includes these numeracy measures from the distant past.
Whereas foreign visitors centuries ago found China remarkably equal and literate (both true!), they described India as an elite upper crust accompanied by intense squalor.