A new, comprehensive preregistered meta-analysis found that, whether the diversity was demographic, cognitive, or occupational, its relationship with performance was near-zero.
These authors were very thorough.
Just take a look at the meta-analytic estimates. These are in terms of correlations, and they are corrected for attenuation.
These effect sizes are statistically significant thanks to the large number of studies, but they are very small, even after the correction blows them up.
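To see what that correction does, here is a minimal sketch of Spearman's disattenuation formula; the raw correlation and reliabilities below are hypothetical, chosen only for illustration:

```python
# Spearman's correction for attenuation: divide the observed correlation
# by the square root of the product of the two measures' reliabilities.
def disattenuate(r_obs, rel_x, rel_y):
    return r_obs / (rel_x * rel_y) ** 0.5

# Hypothetical numbers: a raw r of .02 with reliabilities of .70 and .80
# gets "blown up" to roughly .027.
print(disattenuate(0.02, 0.70, 0.80))  # ~0.027
```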
You may ask yourself: are there hidden moderators?
The answer looks to be 'probably not.' The authors checked team longevity, industry sector, performance measures, power distance, year or country of study, task complexity, team interdependence, and more.
None of it really mattered.
Here's longevity:
Here's power distance:
Here's collectivism:
But let's put this into practical terms.
Using these disattenuated effects, if you had to choose between two teams you otherwise expected to perform comparably, and one was more diverse, picking the more diverse one would be the 'correct' (higher-performing) decision in about 51% of cases, versus 50% by chance.
That assumes there really hasn't been any bias in what gets published. If there has been, you might want to adjust your estimate downwards towards zero, or upwards if you think the literature was rigged the other way.
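For readers who want to see where a figure like that 51% comes from, here is a sketch of the standard conversion from a correlation to a probability of superiority. The r below is a hypothetical stand-in; the paper's exact disattenuated estimates aren't reproduced here:

```python
# Convert a correlation to the probability that the more diverse of two
# otherwise-comparable teams is the higher performer.
import numpy as np
from scipy.stats import norm

def prob_superiority(r):
    d = 2 * r / np.sqrt(1 - r**2)    # correlation -> Cohen's d (equal groups)
    return norm.cdf(d / np.sqrt(2))  # d -> Pr(random team A beats random team B)

print(prob_superiority(0.02))  # ~0.511: barely better than a coin flip
```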
The paper paints an unsupportive picture of the idea that diversity, on its own, makes teams perform better.
Mathematics education is a great way to see genetic stratification happening.
Individuals with higher educational attainment polygenic scores tend to do higher-level mathematics courses and persist with mathematics education for longer.
There also seems to be some moderation of this effect by school quality.
For instance, comparing high-quality and low-quality schools, persistence (going beyond geometry) is about the same for high-PGS (respectively, low-PGS) students, while differences between school types emerge at lower (respectively, higher) PGS levels.
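A small simulation can make that kind of moderation concrete. This is illustrative only, assuming a made-up pattern in which school quality lifts persistence mainly for low-PGS students; nothing below comes from the study's data:

```python
# Illustrative simulation of a PGS-by-school-quality interaction (made-up numbers).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
pgs = rng.standard_normal(n)    # standardized polygenic score
hq = rng.integers(0, 2, n)      # 1 = high-quality school
# Assumed pattern: quality boosts persistence mainly at low PGS.
latent = 0.5 * pgs + 0.4 * hq - 0.3 * pgs * hq + rng.standard_normal(n)
persist = (latent > 0).astype(int)  # 1 = persisted beyond geometry

df = pd.DataFrame({"persist": persist, "pgs": pgs, "hq": hq})
fit = smf.logit("persist ~ pgs * hq", data=df).fit(disp=0)
print(fit.params)  # negative pgs:hq term = flatter PGS gradient in better schools
```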
Policies that reduce material hardship can claim to have some effects on property crime, but the evidence that policies can reliably make a dent in violent crime is much weaker.
And this makes sense: violent crimes are typically crimes of passion, committed in a momentary rage, without any real, coherent motivation.
Since violent crime makes up such a large share of all recidivism, that may be why, for example, jobs programs seem to do a poor job of reducing recidivism rates:
Which is more convincing evidence: p = 0.04 in a sample of 10, or p = 0.04 in a sample of 1,000,000?
Pick an answer, then go to the next post.
OK, now you've answered, and I hope you answered correctly: p = 0.04 in a sample of 10 will generally be much more convincing than the same p-value in a sample of 1,000,000, because with only 10 observations that p-value requires a large effect, whereas with 1,000,000 it can be produced by a vanishingly small one.
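A quick way to see why: compute the correlation required to produce p = 0.04 at each sample size. A minimal sketch using a two-sided Pearson test:

```python
# What correlation does it take to get p = 0.04 at a given sample size?
import numpy as np
from scipy import stats

def r_for_p(p, n):
    """Correlation yielding a two-sided p-value `p` in a Pearson test on n pairs."""
    df = n - 2
    t = stats.t.ppf(1 - p / 2, df)  # critical t for the target p-value
    return t / np.sqrt(t**2 + df)   # invert t = r * sqrt(df) / sqrt(1 - r^2)

for n in (10, 1_000_000):
    print(f"n = {n:>9,}: r = {r_for_p(0.04, n):.3f}")
# n =        10: r = 0.655  -> a large effect had to be present
# n = 1,000,000: r = 0.002  -> a vanishingly small effect suffices
```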
Expressing the effects of different medications after converting them to correlations might help people to overcome their reflexive disdain for small but reliable correlations, since "small" effects are frequently extremely meaningful.
Here are some examples:
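The conversions behind examples like these are simple. Here is a sketch of two standard ones (Cohen's d to r, and an odds ratio to r via the logistic approximation); the input numbers are hypothetical:

```python
import numpy as np

def d_to_r(d):
    """Cohen's d -> correlation, assuming equal group sizes."""
    return d / np.sqrt(d**2 + 4)

def or_to_r(odds_ratio):
    """Odds ratio -> correlation via d = ln(OR) * sqrt(3) / pi."""
    return d_to_r(np.log(odds_ratio) * np.sqrt(3) / np.pi)

print(d_to_r(0.3))   # ~0.148: a "small" d is a modest but real r
print(or_to_r(1.5))  # ~0.111
```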
The need to place effects in practical terms is particularly pressing when a very large effect in the real world takes on what seems to be a small value in terms of common effect-size criteria.
It's important to remember that those criteria are arbitrary.
Another way to express the issue with rejecting small effects:
"One of the most effective drugs, prescribed to millions of people, clearly saving and improving millions of lives has an effect like this scatterplot?"