What do the Washington Post, Brookings, The Atlantic, and Business Insider have in common?
They all employ credulous writers who don't read about the things they write about.
The issue? Attacks on laptop-based note-taking🧵
Each of these outlets (among many others, unfortunately) reported on a 2014 study by Mueller and Oppenheimer, which claimed that laptop-based note-taking was inferior to longhand note-taking for remembering content.
The evidence for this should not have been considered convincing.
In the first study, a sample of 67 students took notes on TED talks, randomized to use either a laptop or longhand, and they were then assessed with factual and open-ended questions. The result? Worse open-ended performance:
The laptop-based note-takers didn't do worse when it came to factual content, but they did do worse on the open-ended questions.
The degree to which they did worse should have been the first red flag: d = 0.34, p = 0.046.
The other red flag should have been that there was no significant interaction between note-taking medium and question type (factual vs. conceptual), p ≈ 0.25. Strangely, that went unnoted, but I will return to it.
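To make the interaction point concrete, here's a minimal sketch (Python, with made-up effect sizes and standard errors rather than the paper's data) of how you test whether the laptop deficit actually differs between question types, instead of just noting that one simple effect is significant and the other isn't:

```python
# Sketch: "significant in one condition, not the other" is not evidence of a
# difference between conditions; that difference has to be tested directly.
# All numbers are illustrative placeholders, not the paper's raw data.
import math
from scipy import stats

def interaction_test(diff_a, se_a, diff_b, se_b):
    """z-test for whether two independent group differences differ."""
    z = (diff_a - diff_b) / math.sqrt(se_a**2 + se_b**2)
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Hypothetical laptop-minus-longhand differences (in SD units) and their SEs:
conceptual = (-0.34, 0.17)  # "significant" on its own (p ≈ 0.05)
factual    = (-0.10, 0.17)  # clearly not significant on its own

z, p = interaction_test(*conceptual, *factual)
print(f"interaction z = {z:.2f}, p = {p:.2f}")  # not close to significant
```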
The authors sought to explain why there was no difference in factual knowledge about the TED talks while there was one in the ability to answer the open-ended, more subjective questions.
Simple: Laptops encouraged verbatim, not creative note-taking.
Before going on to study 2: Do note that all of these bars lack 95% CIs. They show standard errors, so approximately double them in your head if you're trying to figure out which differences are significant.
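For anyone who wants the heuristic spelled out: a 95% CI is roughly the estimate plus or minus two standard errors. A minimal sketch with a made-up estimate and SE:

```python
# Sketch: turning a standard-error bar into an approximate 95% CI.
estimate = -0.34  # hypothetical group difference (SD units)
se = 0.17         # hypothetical standard error read off the bar chart

ci_low, ci_high = estimate - 1.96 * se, estimate + 1.96 * se
print(f"approx. 95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
# Note: non-overlapping SE bars do not by themselves imply a significant
# difference; non-overlapping 95% CIs are a (conservative) sufficient check.
```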
OK, so the second study added an intervention.
The intervention asked people using laptops to try not to take notes verbatim. It failed completely, with a stunningly high p-value as a result:
In terms of performance, there was once again nothing to see for factual recall. But the authors decided to interpret a significant difference on the open-ended questions between the no-intervention laptop participants and the longhand participants as meaningful.
But it wasn't, and the authors should have known it! Throughout the paper they repeatedly bring up interaction tests, and they knew the interaction with the intervention was not significant, so they shouldn't have interpreted that pairwise comparison. They should have affirmed that there was no significant difference!
The fact that the authors knew to test for interactions but didn't heed them was put on brilliant display in study 3, where they ran a different intervention: people were asked either to study or not to study their notes before being tested at a follow-up.
Visual results:
This section reads like someone took a shotgun to the paper and the buckshot was p-values in the dubious, marginal range: a main effect at p = 0.047, a study interaction at p = 0.021, and so on.
It's just a mess and there's no way this should be believed. Too hacked!
And yet, this got plenty of reporting.
So the idea is out there; it's widely reported on. Lots of people start saying you should take notes by hand, not with a laptop.
But the replications start rolling in and it turns out something is wrong.
In a replication of Mueller and Oppenheimer's first study with a sample that was about twice as large, Urry et al. failed to replicate the key performance-related results.
Verbatim note copying and longer notes with laptops? Both confirmed. The rest? No.
So then Urry et al. did a meta-analysis. This was very interesting, because apparently they found that Mueller and Oppenheimer had used incorrect CIs and their results were actually nonsignificant for both types of performance.
Oh and the rest of the lit was too:
Meta-analytically, using a laptop definitely led to higher word counts in notes and more verbatim note-taking, but the performance results just weren't there.
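For readers who haven't looked inside a meta-analysis, the pooled estimate is essentially an inverse-variance-weighted average of the study effects. A minimal fixed-effect sketch with made-up numbers (not Urry et al.'s data):

```python
# Sketch: fixed-effect (inverse-variance) pooling of study effect sizes,
# the basic machinery behind a meta-analytic estimate and its CI.
# The effects and SEs are placeholders, not the actual studies' values.
import math

effects = [-0.30, 0.05, -0.10, 0.12]  # hypothetical standardized mean differences
ses     = [0.17, 0.20, 0.15, 0.22]    # their standard errors

weights = [1 / se**2 for se in ses]                      # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```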
The closest thing the meta-analysis shows to a performance benefit is that conceptual performance may have gone up a tiny bit (nonsignificantly, to be clear), but who even knows whether that assessment is fair.
That's important, since the grading of essays and open-ended questions is frequently biased.
So, ditch the laptop to take notes by hand?
I wouldn't say to do that just yet.
But definitely ditch the journalists who don't tell you how dubious the studies they're reporting on actually are.
There's a popular belief that family wealth is gone in three generations.
The first earns it, the second stewards it, and the third spends it away: from shirtsleeves to shirtsleeves in three generations!
But how true is this belief?
Gregory Clark has new evidence🧵
The first thing to note is that family wealth is correlated across many generations. For example, in medieval England, this is how wealth at death correlates across six generations.
It correlates substantially enough to persist for twelve generations at observed rates of decay:
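As a rough illustration of what "rates of decay" buys you, here's a minimal sketch under a simple first-order transmission model. The per-generation correlation used below is an assumption for illustration, not Clark's estimate:

```python
# Sketch: under a simple AR(1)-style model of transmission, the correlation
# between generation 0 and generation n is b**n, where b is the
# per-generation correlation. The value of b here is illustrative only.
b = 0.8

for n in (1, 3, 6, 12):
    print(f"generation {n:>2}: implied correlation ≈ {b**n:.3f}")
# With a high per-generation correlation, the implied correlation is still
# detectably nonzero a dozen generations out, which is far from the
# "gone in three generations" story.
```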
But why?
The dominant theory among laypeople is social: that the wealth is directly transmitted.
This is testable, and the Malthusian era provides us with lots of data for testing.
The Catholic Church helped to modernize the West through its ban on cousin marriage and its disdain for adoption, but also through its opposition to polygyny.
The origin of this disdain arguably lies with Church Fathers like Justin Martyr, Irenaeus, and Tertullian🧵
Justin Martyr, in his Dialogue with Trypho, argues with a Jew that Christians are the ones living in continuity with God's true intentions.
Justin sees Genesis 2 ("the two shall become one flesh") as normative.
In his apologetic world, Christians are supposed to transcend lust.
Irenaeus, in Against Heresies, is attacking Gnostics (Basilides, Carpocrates), whose sexual practices he finds scandalous.
To him, "temperance dwells, self-restraint is practiced, monogamy is observed"; polygyny is a doctrinal and moral deviation from the order affirmed in creation.
The effects of charter schools on student test scores are meta-analytically estimated to be small.
In this study, the largest effect was estimated to be equivalent to ~1.35 IQ points, for mathematics scores, which consistently showed larger effects than reading scores.
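For reference, the IQ-point framing is just a rescaling of a standardized effect size; here's a minimal sketch of the conversion, assuming the conventional IQ standard deviation of 15:

```python
# Sketch: converting an effect size in SD units to IQ points, assuming the
# conventional IQ scale (mean 100, SD 15).
def sd_units_to_iq_points(d: float, iq_sd: float = 15.0) -> float:
    return d * iq_sd

print(sd_units_to_iq_points(0.09))  # ≈ 1.35 IQ points, i.e. d ≈ 0.09
```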
Similarly, the estimated effect of parents' preferred schools and of elite public secondary schools on test scores is around zero.
More interestingly, it seems charter school openings lead to competition that marginally boosts non-charter student performance and reduces absenteeism to a very small degree:
This analysis has several advantages compared to earlier ones.
The most obvious is the whole-genome data combined with a large sample size. All earlier whole-genome heritability estimates were made using smaller samples and thus had far greater uncertainty.
The next big thing is that the SNP and pedigree heritability estimates came from the same sample.
This can matter a lot.
If one sample has a heritability of 0.5 for a trait and another has a heritability of 0.4, it'd be a mistake to chalk the difference up to the method.