A new, comprehensive preregistered meta-analysis found that, whether the diversity was demographic, cognitive, or occupational, its relationship with performance was near-zero.
These authors were very thorough.
Just take a look at the meta-analytic estimates. These are in terms of correlations, and they are corrected for attenuation.
These effect sizes are statistically significant thanks to the large number of studies, but they are very low, even after the correction inflates them.
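For reference, the attenuation correction at issue is Spearman's classic formula: divide the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch, with made-up reliability values for illustration:

```python
from math import sqrt

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: divide the observed
    correlation by the square root of the product of the reliabilities."""
    return r_observed / sqrt(rel_x * rel_y)

# Hypothetical numbers: an observed r of .02 with reliabilities of .70
# and .80 gets "blown up" to roughly .027 -- still near zero.
corrected = disattenuate(0.02, 0.70, 0.80)
print(round(corrected, 3))
```

The point being made in the text drops out directly: when the observed correlation is near zero, even a generous reliability correction leaves it near zero.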
You may ask yourself: are there hidden moderators?
The answer looks to be 'probably not.' Team longevity, industry sector, performance measures, power distance, year or country of study, task complexity, team interdependence, etc.
None of it really mattered.
Here's longevity:
Here's power distance:
Here's collectivism:
But let's put this into practical terms.
Using these disattenuated effects: if you had to pick between two groups you otherwise expected to perform comparably, and one was more diverse, choosing the more diverse group would be the 'correct' (higher-performing) decision in 51% of cases, versus the 50% you'd get from a coin flip.
That assumes there really hasn't been any bias in what gets published. If there has been, you might want to adjust your estimate downwards towards zero, or upwards if you think the literature was rigged the other way.
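The 51% figure is what you get from a common language effect size conversion: turn the correlation into a Cohen's d, then take the normal CDF of d over the square root of two. A sketch, using an illustrative near-zero correlation rather than the paper's exact estimates:

```python
from math import sqrt
from statistics import NormalDist

def prob_correct_pick(r: float) -> float:
    """Common language effect size: the probability that the group favored
    by the correlate is in fact the higher-performing one.
    Converts r to Cohen's d, then applies Phi(d / sqrt(2))."""
    d = 2 * r / sqrt(1 - r**2)
    return NormalDist().cdf(d / sqrt(2))

# An illustrative near-zero correlation of .02 gives about a 51% hit
# rate, barely better than a coin flip; r = 0 gives exactly 50%.
print(round(prob_correct_pick(0.02), 3))
```

Note how slowly this moves: the correlation has to get fairly large before the practical edge over a coin flip becomes meaningful.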
The paper paints an unsupportive picture of the idea that diversity on its own makes teams more performant.
This analysis has several advantages compared to earlier ones.
The most obvious is the whole-genome data combined with a large sample size. All earlier whole-genome heritability estimates were made with smaller samples, and thus carried far greater uncertainty.
The next big thing is that the SNP and pedigree heritability estimates came from the same sample.
This can matter a lot.
If one sample yields a heritability of 0.5 for a trait and a different sample yields 0.4, it'd be a mistake to chalk the difference up to the method, because the samples themselves may differ. Estimating both in the same sample removes that confound.
The original source for the Medline p-values explicitly compared the distributions in the abstracts and full-texts.
They found that there was a kink such that positive results had an excess of confidence interval lower bounds just above 1 and negative results had an excess of upper bounds just below 1.
They then explicitly compared the distributional kinkiness from Medline to the distributions from an earlier paper that was similar to a specification curve analysis.
That meant comparing Medline to a benchmark that was definitely not subject to p-hacking or publication bias.
I got blocked for this meager bit of pushback on an obviously wrong idea lol.
Seriously:
Anyone claiming that von Neumann was tutored into being a genius is high on crack. He could recite the lines from any page of any book he ever read. That's not education!
'So, what's your theory on how von Neumann could tell you the exact weights and dimensions of objects without measuring tape or a scale?'
'Ah, it was the education that was provided to him, much like the education provided to his brothers and cousins.'
'How could his teachers have set him up to connect totally disparate fields in unique ways, especially given that every teacher who ever talked about him noted that he was much smarter than them and they found it hard to teach him?'
This study also provides more to differentiate viral myocarditis from vaccine """myocarditis""", which, again, is mild, resolves quickly, and so on, unlike real myocarditis.
To see what it is, first look at this plot, showing COVID infection risks by time since diagnosis:
Now look at the risks by time since injection.
See the difference?
The risks related to infection hold up for a year or more. The risks related to injection, by contrast, are short-term.