A new, comprehensive preregistered meta-analysis found that, whether the diversity was demographic, cognitive, or occupational, its relationship with performance was near-zero.
These authors were very thorough.
Just take a look at the meta-analytic estimates. These are in terms of correlations, and they are corrected for attenuation.
The effect sizes are statistically significant thanks to the large number of studies, but they are very small, even after being blown up by the attenuation correction.
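To make 'corrected for attenuation' concrete, here's a minimal sketch of Spearman's correction, the standard way to disattenuate a correlation. The observed correlation and the reliabilities below are made-up numbers for illustration, not values from the paper:

```python
# Illustrative sketch of Spearman's correction for attenuation.
# The numbers are invented for the example, not taken from the meta-analysis.
import math

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Divide the observed correlation by the square root of the product
    of the two measures' reliabilities."""
    return r_obs / math.sqrt(rel_x * rel_y)

r_observed = 0.02                 # hypothetical observed diversity-performance correlation
reliability_diversity = 0.80      # hypothetical reliability of the diversity measure
reliability_performance = 0.60    # hypothetical reliability of the performance measure

print(round(disattenuate(r_observed, reliability_diversity, reliability_performance), 3))
# 0.029 -- larger than the observed 0.02, but still tiny
```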
You may ask yourself: are there hidden moderators?
The answer looks to be 'probably not.' Team longevity, industry sector, performance measures, power distance, year or country of study, task complexity, team interdependence, etc.
None of it really mattered.
Here's longevity:
Here's power distance:
Here's collectivism:
But let's put this into practical terms.
Using these disattenuated effects, if you had to choose between two groups you otherwise expected to perform comparably and you picked the more diverse one, you'd make the 'correct' (higher-performing) choice in 51% of cases, versus the 50% you'd get from a coin flip.
That assumes there really hasn't been any bias in what gets published. If there has been, you might want to adjust your estimate downwards towards zero, or upwards if you think the literature was rigged the other way.
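For the curious, a figure like that 51% can be reproduced with Dunlap's common-language effect size for correlations, which converts a correlation into the probability that the team higher on one variable is also higher on the other. A minimal sketch, assuming a disattenuated correlation of roughly 0.03 (an illustrative value, not the paper's exact estimate):

```python
# Converting a correlation into the probability of a 'correct' pick,
# via Dunlap's (1994) common-language effect size for correlations.
# Assumes bivariate normality; the r of 0.03 is an illustrative value.
import math

def prob_correct_pick(r: float) -> float:
    """P(the team higher on diversity is also higher on performance)."""
    return 0.5 + math.asin(r) / math.pi

print(round(prob_correct_pick(0.00), 3))  # 0.5   -- a coin flip
print(round(prob_correct_pick(0.03), 3))  # ~0.51 -- the figure quoted above
```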
The paper provides little support for the idea that diversity on its own makes teams perform better.
After the Counter-Reformation began, Protestant Germany started producing more elites than Catholic Germany.
Protestant cities also attracted more of these elite individuals, but the inflows went primarily to the cities with the most progressive governments🧵
Q: What am I talking about?
A: Kirchenordnung, or Church Orders, otherwise known as Protestant Church Ordinances: a sort of governmental compact that started cropping up in Protestant cities after the Reformation.
Q: Why these things?
A: Protestants wanted to establish political institutions in their domains that replaced those previously provided by the Catholics, or that otherwise departed from how things had been done.
What predicts a successful educational intervention?
Unfortunately, the answer is not 'methodological propriety'; in fact, it's the opposite🧵
First up: home-made measures, a lack of randomization, and being published rather than left unpublished all predict larger effects.
It is *far* easier to cook the books with an in-house measure, and it's far harder for other researchers to evaluate what's going on because, by definition, they cannot already be familiar with it.
Additionally, smaller studies tend to have larger effects—a hallmark of publication bias!
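As a toy illustration of why small-study effects point to publication bias (not a reanalysis of the education literature): if the true effect is zero but only 'significant' estimates get published, the smallest studies end up reporting the largest effects.

```python
# Toy simulation: true effect = 0, but only 'significant' estimates get
# published, so published effects shrink as sample size grows.
# Purely illustrative; nothing here comes from the actual literature.
import random
import statistics

random.seed(1)
published = []
for _ in range(20000):
    n = random.choice([20, 50, 100, 400, 1000])   # study sample size
    se = 1 / n ** 0.5                              # standard error of the estimate
    estimate = random.gauss(0.0, se)               # estimate around a true effect of 0
    if abs(estimate) / se > 1.96:                  # "statistically significant" -> published
        published.append((n, abs(estimate)))

for size in (20, 50, 100, 400, 1000):
    effects = [e for n, e in published if n == size]
    print(size, round(statistics.mean(effects), 3))
# Mean published |effect| falls steadily as n rises: the small-study pattern.
```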
Education, like many fields, clearly has a bias towards significant results.
Notice the extreme excess of results with p-values that are 'just significant'.
Once you realize this is happening, the pattern above should make you suspicious of the literature's reported effects.
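One common way to formalize that suspicion is a caliper test: count reported p-values just below .05 versus just above it, which should be roughly balanced absent selection on significance. A minimal sketch with fabricated p-values (not the data behind the figure above):

```python
# Caliper test sketch: compare counts of p-values just below vs. just above .05.
# The p-values here are fabricated for illustration.
from scipy.stats import binomtest

p_values = [0.041, 0.048, 0.046, 0.049, 0.044, 0.047, 0.052, 0.058, 0.043, 0.039]
just_below = sum(0.04 <= p < 0.05 for p in p_values)   # 'just significant'
just_above = sum(0.05 <= p < 0.06 for p in p_values)   # 'just non-significant'

# Absent selection on significance, results should fall on either side about equally.
result = binomtest(just_below, just_below + just_above, 0.5)
print(just_below, just_above, round(result.pvalue, 3))
```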
Across five different large samples, the same pattern emerged:
Trans people had rates of autism several times higher than non-trans people.
And it wasn't just diagnoses: comparing non-autistic trans people with non-autistic non-trans people, the trans group was consistently shifted towards showing more autistic traits.
In two of the available datasets, the same pattern extended beyond autism to other psychiatric conditions.
That is, trans people were also at an elevated risk of ADHD, bipolar disorder, depression, OCD, and schizophrenia, before and after making various adjustments.