The FBI has finally released crime statistics for 2023!
Let's have a short thread.
First thing up is recent violent crime trends:
Now let's focus in on homicides.
The homicide statistics split by race show the same distribution they have for years.
As with every crime, it's still men doing the killing, but it's also largely men doing the dying.
What about Hispanics? Their data is still a mess, but here it is if you're interested.
The age-crime curve last year looked pretty typical. How about this year?
Same as always. Victims and offenders still have highly similar, relatively young ages.
Everything else, from locations to motives to weapons, is pretty similar to previous years. What's different is that the OP might show incorrect numbers.
For the past two years, the FBI has silently updated their numbers after about two weeks.
You can use the Web Archive to verify that the OP's data is what was shown at release last year, and that the 2023 data is just the 2022 data with the FBI's suggested reductions applied (i.e., -11.6% homicides, -2.8% aggravated assaults, -0.3% robberies, etc.).
But you can see on their site that they've since adjusted the numbers upward, so the reduction they suggested lands at a figure less impressive than my chart shows. The difference isn't huge, so I kept the OP as-is rather than updating to their new data.
For reference, 2022 as reported then had a homicide rate of 6.3/100k, and they silently updated that to 7.48/100k. The 2023 data they provided today actually has a murder rate of 6.61/100k, higher than last year's initially-reported number, but lower than the updated number. To make matters worse, if you use their Expanded Homicides Report, you get a rate of 5.94 for 2022 and 5.24 for 2023.
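A quick sanity check on the arithmetic above: applying the FBI's suggested -11.6% homicide reduction to the silently-updated 2022 rate of 7.48/100k reproduces the 2023 rate of 6.61/100k.

```python
# Sanity check: the 2023 rate looks like the updated 2022 rate
# with the FBI's suggested -11.6% reduction applied.
updated_2022_rate = 7.48   # homicides per 100k, 2022 after the silent update
suggested_change = -0.116  # FBI's suggested year-over-year homicide reduction

implied_2023_rate = updated_2022_rate * (1 + suggested_change)
print(round(implied_2023_rate, 2))  # → 6.61, matching the released 2023 rate
```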
Methodology matters, and this year's data is internally inconsistent even before any silent updates. It's a mess, so take everything with a grain of salt and, in the interest of caution, interpret only trends. Trends are mostly consistent across data sources even when the absolute magnitudes are off, constantly updated, etc.
After the Counter-Reformation began, Protestant Germany started producing more elites than Catholic Germany.
Protestant cities also attracted more of these elite individuals, but primarily to the places with the most progressive governments🧵
Q: What am I talking about?
A: Kirchenordnung, or Church Orders, otherwise known as Protestant Church Ordinances: a sort of governmental compact that started cropping up in Protestant cities after the Reformation.
Q: Why these things?
A: Protestants wanted to establish political institutions in their domains that replaced those the Catholics had previously provided, or that otherwise departed from how things had been done.
What predicts a successful educational intervention?
Unfortunately, the answer is not 'methodological propriety'; in fact, it's the opposite🧵
First up: home-made measures, a lack of randomization, and being published rather than unpublished all predict larger effects.
It is *far* easier to cook the books with an in-house measure, and it's far harder for other researchers to evaluate what's going on because they definitionally cannot be familiar with it.
Additionally, smaller studies tend to have larger effects—a hallmark of publication bias!
Education, like many fields, clearly has a bias towards significant results.
Notice the extreme excess of results with p-values that are 'just significant'.
Once you realize this is happening, the pattern above should make you suspicious.
Across five different large samples, the same pattern emerged:
Trans people's autism rates were several times higher than non-trans people's.
In addition to the higher autism rates, when comparing non-autistic trans people with non-trans people, the trans group was consistently shifted towards showing more autistic traits.
In two of the available datasets, the autism result replicated across other psychiatric traits.
That is, trans people were also at an elevated risk of ADHD, bipolar disorder, depression, OCD, and schizophrenia, before and after making various adjustments.
Across 68,000 meta-analyses including over 700,000 effect size estimates, correcting for publication bias tended to:
- Markedly reduce effect sizes
- Markedly reduce the probability that there is an effect at all
Economics hardest hit:
Even this is perhaps too generous.
Recall that correcting for publication bias often produces effects that are still larger than the effects attained in subsequent large-scale replication studies.
A great example of this comes from priming studies.
Remember money priming, where simply seeing or handling money made people more selfish and better at business?
Those studies were riddled with publication bias, and preregistered replications failed to find any effect at all.
It argues that one of the reasons there was an East Asian growth miracle but not a South Asian one is human capital.
For centuries, South Asia has lagged on average human capital, whereas East Asia has done very well in all our records.
It's unsurprising that these differences persist today.
We already know based on three separate instrumental variables strategies using quite old datapoints that human capital is causal for growth. That includes these numeracy measures from the distant past.
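A toy illustration of why an instrumental variable identifies a causal effect where OLS doesn't (simulated data; none of the numbers or variables correspond to the actual studies): when an unobserved confounder drives both human capital and growth, OLS is biased, but variation induced by an exogenous instrument recovers the true effect.

```python
import random

random.seed(1)
N = 5000
BETA_TRUE = 2.0  # true causal effect in this simulation (made up)

# z: instrument (e.g., a historical shock), u: unobserved confounder
Z, X, Y = [], [], []
for _ in range(N):
    z = random.gauss(0, 1)
    u = random.gauss(0, 1)
    x = z + u + random.gauss(0, 1)                   # human capital, confounded by u
    y = BETA_TRUE * x + 2 * u + random.gauss(0, 1)   # growth, also driven by u
    Z.append(z); X.append(x); Y.append(y)

def cov(a, b):
    ma, mb = sum(a) / N, sum(b) / N
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / N

beta_ols = cov(X, Y) / cov(X, X)  # biased upward: u moves x and y together
beta_iv = cov(Z, Y) / cov(Z, X)   # Wald/IV estimator: uses only z-driven variation in x
print(f"OLS={beta_ols:.2f}, IV={beta_iv:.2f}, true={BETA_TRUE}")
```

OLS lands well above 2.0 because the confounder inflates the association, while the IV estimate sits near the true value — the same logic behind using old, plausibly exogenous datapoints as instruments.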
Where foreign visitors centuries ago found China remarkably equal and literate (both true!), they saw in India an elite upper crust accompanied by intense squalor.