The American nuclear industry illustrates negative learning: the costs of plants have increased over time.
But this is not nuclear's fault. Almost everywhere else, the learning rate is positive: costs decline as the industry gains experience building plants!
🧵
Consider France:
The U.S. has really only experienced cost overruns since the Three Mile Island incident, and the reason is that the industry became overregulated in response to the ensuing public outcry.
In general, nuclear cost overruns are driven by indirect costs, like having to hire more safety professionals due to added regulatory burdens.
Those explain 72% of the price hike in the U.S., 1976-87:
A more recent OECD report on nuclear from 2020 noted that "indirect cost[s] are the main driver of these cost overruns" and that 80% of those indirect costs are attributable to largely unnecessary labor.
The regulatory costs levied against nuclear are so extreme that they can make components cost 50 times what they should, like in the case of 75 mm stainless steel gate valves.
The main factor differentiating nuclear-grade and industrial-grade components? Unnecessary quality certification.
The question is less "Why is nuclear expensive?" and more "Why is nuclear overregulated?"
And the reason isn't clear-cut. It's clearly not as simple as saying "ALARA!" (As Low As Reasonably Achievable), since many countries manage positive learning despite sticking to the same philosophy.
It's more likely a combination of factors, prominently including activism.
Thanks to activism, the U.S. nuclear fleet won't achieve French emission levels: under the Carter administration, activists managed to get reprocessing banned, tarring nuclear's reputation via the 'waste' issue.
In any case, nuclear remains a viable option for cleanly powering the future, and continued research into it is necessary for taking us to the stars.
Moreover, for consumers, it remains beneficial ($!) so long as intermittent forms of generation are, well, intermittent.
There's more that can be said, but I'll cut it off there.
Sources:
To read way more on this, check out this IFP piece:
After the Counter-Reformation began, Protestant Germany started producing more elites than Catholic Germany.
Protestant cities also attracted more of these elite individuals, with the gains concentrated in the cities with the most progressive governments🧵
Q: What am I talking about?
A: Kirchenordnungen, or Church Orders, otherwise known as Protestant Church Ordinances: a sort of governmental compact that started cropping up in Protestant cities after the Reformation.
Q: Why these things?
A: Protestants wanted to establish political institutions in their domains that replaced those previously provided by the Catholic Church, or that otherwise departed from how things had been done.
What predicts a successful educational intervention?
Unfortunately, the answer is not 'methodological propriety'; in fact, it's the opposite🧵
First up: home-made measures, a lack of randomization, and a study being published rather than unpublished all predict larger effects.
It is *far* easier to cook the books with an in-house measure, and it's far harder for other researchers to evaluate what's going on, because by definition they cannot be familiar with it.
Additionally, smaller studies tend to have larger effects—a hallmark of publication bias!
Education, like many fields, clearly has a bias towards significant results.
Notice the extreme excess of results with p-values that are 'just significant'.
Once you realize this is happening, the pattern above should make you suspicious.
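To make that concrete, here's a toy simulation of my own (not the data behind the figure): when significant results almost always get published and null results mostly don't, the published record shows exactly this cliff at p = .05. The effect size, sample size, and 10% file-drawer survival rate are all assumed for illustration.

```python
# Toy simulation of publication bias. All parameters (effect size d = 0.1,
# n = 40 per group, 10% publication rate for null results) are assumptions
# for illustration, not estimates from the education literature.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
published = []

for _ in range(20_000):
    a = rng.normal(0.0, 1.0, 40)   # control group
    b = rng.normal(0.1, 1.0, 40)   # treatment group with a tiny true effect
    p = stats.ttest_ind(a, b).pvalue
    # Significant results almost always get published;
    # null results usually land in the file drawer.
    if p < 0.05 or rng.random() < 0.10:
        published.append(p)

# Count published p-values in 0.01-wide bins around the .05 threshold.
counts, edges = np.histogram(published, bins=np.arange(0.0, 0.11, 0.01))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"p in [{lo:.2f}, {hi:.2f}): {n}")
# The bins just below .05 stay full while the bins just above it
# nearly empty out: an excess of 'just significant' results.
```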
Across five different large samples, the same pattern emerged:
Trans people tended to have autism rates several times higher than non-trans people.
The difference isn't just diagnoses: comparing non-autistic trans people with non-trans people, the trans group was consistently shifted toward showing more autistic traits.
In two of the available datasets, the autism result replicated across other psychiatric traits.
That is, trans people were also at an elevated risk of ADHD, bipolar disorder, depression, OCD, and schizophrenia, before and after making various adjustments.
Across 68,000 meta-analyses including over 700,000 effect size estimates, correcting for publication bias (a toy sketch of one such correction follows after this list) tended to:
- Markedly reduce effect sizes
- Markedly reduce the probability that there is an effect at all
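For the curious, here's a minimal sketch of one common correction, a PET-style regression: regress observed effects on their standard errors, and the intercept estimates what a study with zero sampling error would find. The data are simulated; this is not the procedure or data of the paper above.

```python
# PET-style publication-bias correction on simulated data.
# The true effect (0.1), SE range, and hard significance filter are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.1
ses = rng.uniform(0.05, 0.4, 2_000)       # per-study standard errors
effects = rng.normal(true_effect, ses)    # observed effect sizes
survives = effects / ses > 1.96           # only 'significant' studies publish
d, se = effects[survives], ses[survives]

print(f"naive mean of published effects: {d.mean():.3f}")  # badly inflated

# PET: weighted least squares of effect size on SE (weights 1/SE^2);
# the intercept is the estimated effect at SE = 0.
X = np.column_stack([np.ones_like(se), se])
W = 1.0 / se**2
beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * d))
print(f"PET intercept: {beta[0]:.3f}")
# Far closer to the true 0.1 than the naive mean, though still somewhat
# inflated: an echo of the point below that corrections can be too generous.
```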
Economics hardest hit:
Even this is perhaps too generous.
Recall that correcting for publication bias often produces effects that are still larger than the effects attained in subsequent large-scale replication studies.
A great example of this comes from priming studies.
Remember money priming, where simply seeing or handling money supposedly made people more selfish and better at business?
Those studies were riddled with publication bias, and preregistered studies totally failed to find the effect.
This paper argues that one of the reasons there was an East Asian growth miracle but not a South Asian one is human capital.
For centuries, South Asia has lagged on average human capital, whereas East Asia has done very well in all our records.
It's unsurprising that these differences continue today.
We already know, from three separate instrumental-variables strategies using quite old datapoints, that human capital is causal for growth. That includes numeracy measures from the distant past.
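Since the thread leans on instrumental variables, here's a minimal sketch of the logic on simulated data. The instrument, coefficients, and data are invented for illustration; this is not any of the three papers' actual designs.

```python
# Minimal instrumental-variables (Wald/2SLS) sketch on simulated data.
# 'z' stands in for some historical shock that moves human capital but
# affects growth only through it; everything here is invented.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # human capital (endogenous)
y = 0.5 * x + u + rng.normal(size=n)    # growth; true causal effect = 0.5

# Naive OLS is biased upward because u drives both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) estimator: cov(z, y) / cov(z, x). With one instrument and
# one endogenous regressor this equals 2SLS.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {ols:.3f} (inflated by confounding)")
print(f"IV estimate:  {iv:.3f} (recovers ~0.5)")
```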
Whereas foreign visitors centuries ago found China remarkably equal and literate (both true!), they noticed that India had an elite upper crust accompanied by intense squalor.