Among health "experts" who tweeted about Monkeypox, there was a dramatic tendency to get basic facts wrong.
For example, many claimed risk wasn't especially heightened among gay men.
PhDs were among the worst misinformation spreaders.
Being an "expert", being "credentialed", having "studied" something and so on, is not sufficient to make someone truly credible, to endow their words with reliability.
Being right is, and most popular "experts" were usually not right.
Anti-racism trainings probably lead people to accuse others of racism even when they're not racist.
That's exactly the result of a new study on DEI trainings, with a special focus on the impact of the works of Ibram X. Kendi and Robin DiAngelo.
Let's dig in🧵
In the first experiment, the researchers took 324 participants and randomized them either to read an excerpt from Ibram X. Kendi or Robin DiAngelo or to a racially neutral control condition in which they read about corn.
Here are some excerpts from the reading materials, for your understanding:
After learning, for example, that Western countries are compromised by virtue of their racist ideologies and pasts, participants were presented with a scenario that was totally racially neutral.
The scenario is described as follows; no one involved did anything racist:
After you've read enough about how civil servants stopped Trump from governing, it's hard not to conclude that what Biden's administration is doing at the OPM is at least un-American and maybe evil.
Consider how career lawyers at the EPA simply refused to tell Trump what the agency was doing:
Or the time the Department of Labor had to write a regulation a competent attorney could have drafted solo in two weeks, but told Trump it would take a year.
At that pace, everyone on the team would be writing less than one line a day.
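To see the arithmetic, take some assumed numbers; the regulation's length and the team size below are my illustration, not figures from the story:

```python
# Back-of-the-envelope check: a ~2,000-line regulation, a 10-attorney
# team, and ~250 working days in a year (all three numbers assumed).
reg_lines, attorneys, working_days = 2_000, 10, 250
print(f"{reg_lines / (attorneys * working_days):.2f} lines per person per day")  # 0.80
```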
Plenty of civil servants knowingly misinformed Trump during his first term.
The reason America appears overrun with "refugees" is a loophole.
Under Obama, asylum seekers became able to say "asylum" or "credible fear" to immigration officers to enter the asylum review process.
Obama and Biden chose not to detain them during that process.
The problem has reached incredible proportions under Biden because of social media.
People are now aware of the "credible fear" standard because of posts on TikTok that explain exactly how to exploit the loophole.
And it is easy. It's often as simple as saying the right words.
Obama's loosening of the standards degraded America's illegal immigration situation because the system became incredibly simple to exploit.
The First Law of Behavioral Genetics holds that everything is somewhat heritable, but a new adoption study suggests some exceptions, and they're doozies.
You know what's not heritable? Belief in genetic determinism.
This is a funny result at first blush, but I'm not so sure what to think of it.
The authors suggested that their measures were reliable, so the limited systematic within- and between-family variance wasn't due to unreliability, but I'm skeptical.
The reliability measures they reported were internal-consistency measures, not test-retest reliability, and the two do not necessarily agree.
As an example, the UK Biobank's cognitive test has moderate-to-high internal reliability but low test-retest reliability.
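To make the distinction concrete, here's a minimal simulation, mine and not the paper's, with every parameter made up: a transient, occasion-specific "state" shifts all items in a sitting together, so items cohere within a sitting (high alpha) even though total scores barely correlate across sittings (low test-retest).

```python
# Toy model: item score = stable trait + occasion-specific state + item noise.
# The state moves all items in a sitting together, inflating internal
# consistency, but it doesn't carry over to a second sitting.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 2_000, 10

def administer(trait):
    state = rng.normal(0.0, 1.0, size=(n_people, 1))        # per-occasion state
    noise = rng.normal(0.0, 1.5, size=(n_people, n_items))  # per-item noise
    return trait + state + noise

def cronbach_alpha(items):
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

trait = rng.normal(0.0, 0.6, size=(n_people, 1))  # small stable component
time1, time2 = administer(trait), administer(trait)

print(f"internal consistency (alpha): {cronbach_alpha(time1):.2f}")  # ~0.86
retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"test-retest correlation:      {retest_r:.2f}")               # ~0.23
```

The point is just that internal consistency certifies the items hang together on one occasion; it says nothing about stability over time.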
Can you improve student outcomes by promoting growth mindsets?
Authors with financial conflicts of interest (for example, they sell growth mindset books or corporate trainings) publish studies that say 'YES!'
Authors without financial incentives to say 'yes'... they say 'no'.
Now here's a twist:
In unpublished studies by financially conflicted and non-conflicted authors alike, the effect sizes are indistinguishable, and they're consistently small.
The financially conflicted know that growth mindset doesn't work; they just lie.
They lie by omission, to be clear, and this is definitely their fault, not the fault of journal editors who won't publish nulls.
Why? Because if these authors were honest, they would speak up against their financial interest, and they don't.
In the distant past of the 1970s, audit studies (where you send in fake applications and compare callback rates) showed evidence of a preference for men across different jobs (OR > 1).
Not so much anymore🧵
Each point on that plot is the result of a different study. This is a large meta-analysis of audit studies, with a lot of different effect sizes to choose from.
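For readers who want the mechanics, here's a toy sketch, with every number invented, of the two quantities in play: the odds ratio from a single audit study, and the inverse-variance pooling commonly used to combine studies in a meta-analysis like this one.

```python
import math

# One hypothetical study: callbacks out of applications sent, by sex.
male_callbacks, male_apps = 90, 500
female_callbacks, female_apps = 75, 500
male_odds = male_callbacks / (male_apps - male_callbacks)
female_odds = female_callbacks / (female_apps - female_callbacks)
print(f"single-study OR: {male_odds / female_odds:.2f}")  # >1 means men favored; ~1.24 here

# Fixed-effect pooling of several studies' log odds ratios (made-up values).
studies = [(0.22, 0.10), (-0.05, 0.08), (-0.18, 0.12)]  # (log OR, SE)
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)
print(f"pooled OR: {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.01 (0.90-1.12)
```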
For example, we can look across jobs that are male-typed, female-typed, or not sex-typed, and we get different results:
In the gender-balanced and male-typed jobs, bias is small, but in female-typed jobs, women seem to be preferred to men.
Comparing these coefficients over time shows that the bias in favor of men was never really significant, and what remains now is a pro-female bias: