There's a threshold 0.67 SDs (10 points) above the mean of the higher-performing of two equal-variance groups separated by 0.97 d.
With simulated group sizes of one million each, restricting to those above the threshold shrinks both the mean difference and the SDs. The gap among those who clear the cut is 0.412 d.
But we know that the 0.97 d gap is an underestimate due to range restriction.
Using MBE scores, it looks like the unrestricted gap should be more like 1.22 d. That leaves us with a 0.537 d gap above the threshold.
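The selection logic above can be sketched in a few lines. This is a minimal simulation under the stated assumptions (equal-variance normal groups, a cut 0.67 SD above the higher group's mean); the function name and seed are my own, and the exact residual gap depends on details like how the pooled SD is computed:

```python
# Sketch of the threshold simulation: two equal-variance normal groups,
# a passing cut 0.67 SD above the higher group's mean, and the d gap
# recomputed among only those who clear the cut.
import numpy as np

def gap_above_threshold(full_gap_d, cut=0.67, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    higher = rng.standard_normal(n)              # mean 0, SD 1
    lower = rng.standard_normal(n) - full_gap_d  # shifted down by the full gap
    a, b = higher[higher > cut], lower[lower > cut]
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(round(gap_above_threshold(0.97), 3))  # ~0.41, close to the 0.412 d figure
print(round(gap_above_threshold(1.22), 3))  # larger residual gap with the corrected 1.22 d
```

Selection shrinks the gap because both groups' above-threshold tails have compressed means and SDs, but the lower-scoring group's tail is more selected, pulling its passers' mean up more.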
Do we have subsequent performance measures?
Yes! We have three:
- Complaints made against attorneys
- Probations
- Disbarments
For men, the gaps, in order, are 0.576, 0.513, and 0.564 d. For women, the gaps are 0.576, 0.286, and 0.286 d.
Men fit expectations and women apparently needed less discipline.
These gaps probably replicate nationally.
For example, here are Texas pass rates from 2004 - a 0.961 d Black-White first-pass gap. The 2006 update to these figures raised the gap to 0.969 d.
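Pass-rate gaps like these are conventionally converted to d by taking the difference of normal quantiles of each group's pass rate, assuming normal underlying score distributions with equal variances. A minimal sketch — the pass rates below are hypothetical, not the actual Texas figures:

```python
# Converting two groups' pass rates to a d gap under the usual
# normal-distribution assumption: each pass rate pins down how far the
# cut score sits relative to that group's mean, and the gap is the
# difference of those distances.
from statistics import NormalDist

def d_gap_from_pass_rates(rate_hi, rate_lo):
    nd = NormalDist()
    # inv_cdf(rate) = how far (in SDs) the group's mean sits above the cut
    return nd.inv_cdf(rate_hi) - nd.inv_cdf(rate_lo)

# Hypothetical rates for illustration only:
print(round(d_gap_from_pass_rates(0.80, 0.50), 3))
```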
Those figures are basically in line with LSAC's national study of Bar exam pass rates.
And those are basically in line with New York's gaps.
And this should probably be expected, since these tests measure largely the same things.
Since everyone included in these statistics attended ABA-accredited schools, they all had the opportunity to learn what was required to perform well on these tests.
But just like the Step examinations for medical doctors, the gaps on the tests and in real life remain.
• • •
This analysis has several advantages compared to earlier ones.
The most obvious is the whole-genome data combined with a large sample size. All earlier whole-genome heritability estimates were made with smaller samples, and thus carried far greater uncertainty.
The next big thing is that the SNP and pedigree heritability estimates came from the same sample.
This can matter a lot.
If a SNP-based heritability of 0.4 comes from one sample and a pedigree-based heritability of 0.5 comes from another, it'd be a mistake to chalk the difference up to the method, since the samples themselves may differ.
The original source for the Medline p-values explicitly compared the distributions in the abstracts and full-texts.
They found a kink: positive results had excess confidence-interval lower bounds just above 1, and negative results had excess upper bounds just below 1.
They then explicitly compared the kink in the Medline distributions to the distributions from an earlier paper that resembled a specification curve analysis.
That meant comparing Medline to a result that was definitely not subject to p-hacking or publication bias.
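Why excess lower bounds just above 1 signal selection can be shown with a toy simulation. Everything here (the spread of true effects, the standard error, the reporting rate for nulls) is a made-up assumption, not the Medline data — the point is only that partial reporting of nonsignificant results produces exactly this kind of kink at 1:

```python
# Toy illustration of the CI "kink": if significant positive results are
# always reported but nonsignificant ones only sometimes, odds-ratio CI
# lower bounds show excess mass just above 1 relative to just below it.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
theta = rng.normal(0.0, 0.3, n)        # true log odds ratios
se = 0.15
est = theta + rng.normal(0.0, se, n)   # per-study estimates
lo = np.exp(est - 1.96 * se)           # 95% CI lower bound, OR scale

significant = lo > 1
reported = significant | (rng.random(n) < 0.3)  # nulls reported 30% of the time

def above_below_ratio(bounds, width=0.05):
    """Mass of lower bounds just above 1 relative to just below it."""
    above = np.mean((bounds > 1) & (bounds < 1 + width))
    below = np.mean((bounds > 1 - width) & (bounds <= 1))
    return above / below

print(round(above_below_ratio(lo), 2))            # all studies: smooth across 1
print(round(above_below_ratio(lo[reported]), 2))  # reported subset: kink at 1
```

A result that genuinely wasn't subject to selection, like the specification-curve-style comparison, should look like the first line, not the second.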
I got blocked for this meager bit of pushback on an obviously wrong idea lol.
Seriously:
Anyone claiming that von Neumann was tutored into being a genius is high on crack. He could recite the lines from any page of any book he ever read. That's not education!
'So, what's your theory on how von Neumann could tell you the exact weights and dimensions of objects without measuring tape or a scale?'
'Ah, it was the education that was provided to him, much like the education provided to his brothers and cousins.'
'How could his teachers have set him up to connect totally disparate fields in unique ways, especially given that every teacher who ever talked about him noted that he was much smarter than them and they found it hard to teach him?'
This study also provides more to differentiate viral myocarditis from vaccine """myocarditis""", which again, is mild, resolves quickly, etc., unlike real myocarditis.
To see what it is, first look at this plot, showing COVID infection risks by time since diagnosis:
Now look at risks since injection.
See the difference?
The risks related to infection hold up for a year or more. The risks related to injection, by contrast, are short-term.