What do the Washington Post, Brookings, The Atlantic, and Business Insider have in common?
They all employ credulous writers who don't carefully read the studies they write about.
The issue? Attacks on laptop-based note-taking 🧵
Each of these outlets (among many others, unfortunately) reported on a 2014 study by Mueller and Oppenheimer, which claimed that laptop-based note-taking was inferior to longhand note-taking for remembering content.
The evidence for this should not have been considered convincing.
In the first study, a sample of 67 students was randomized to watch and take notes on different TED talks; they were then assessed with factual and open-ended questions. The result? Worse open-ended performance:
The laptop-based note-takers didn't do worse when it came to factual content, but they did do worse when it came to the open-ended questions.
The degree to which they did worse should have been the first red flag: d = 0.34, p = 0.046.
The other red flag should have been that there was no significant interaction between note-taking medium and question type (factual vs. conceptual; p ≈ 0.25). Strangely, that went unnoted, but I will return to it.
The authors sought to explain why there was no difference in factual knowledge about the TED talks while there was one in the ability to provide open-ended, more subjective answers.
Simple: Laptops encouraged verbatim, not creative note-taking.
Before going on to study 2: Do note that all of these bars lack 95% CIs. They show standard errors, so approximately double them in your head if you're trying to figure out which differences are significant.
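Since the figures only show SE bars, here's the rough conversion the thread suggests, as a short Python sketch. The means and SEs below are made up for illustration, not taken from the paper:

```python
# Why "double the SE bars in your head": a 95% CI is ~1.96 SEs wide on each
# side, so two SE bars can look cleanly separated while the 95% CIs overlap.
# All numbers here are hypothetical, chosen only to illustrate the point.

def ci95_from_se(mean, se):
    """95% CI under a normal approximation: mean ± 1.96 * SE."""
    half = 1.96 * se
    return (mean - half, mean + half)

laptop = ci95_from_se(0.10, 0.12)     # hypothetical z-scored performance
longhand = ci95_from_se(0.44, 0.12)

# SE bars of ±0.12 wouldn't touch, but the 95% CIs overlap:
print(tuple(round(x, 3) for x in laptop))
print(tuple(round(x, 3) for x in longhand))
```

If the two intervals overlap substantially, eyeballing the SE bars alone will overstate how many of the plotted differences are significant.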
OK, so the second study added an intervention.
The intervention asked people using laptops to try not to take notes verbatim. This intervention totally failed, with a stunningly high p-value as a result:
In terms of performance, there was once again nothing to see for factual recall. But, the authors decided to interpret a significant difference between the laptop-nonintervention participants and longhand participants in the open-ended questions as being meaningful.
But it wasn't, and the authors should have known it! Throughout this paper, they repeatedly bring up interaction tests, and they knew the interaction with the intervention did nothing, so they shouldn't have interpreted that simple-effect difference. They should have affirmed no significant difference!
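The interaction logic at stake here can be sketched numerically: a significant laptop-vs.-longhand gap in one condition isn't enough on its own, because the difference in differences across conditions has to be tested too. The effect sizes and SEs below are invented to roughly mirror the thread's numbers (a d = 0.34 gap that's "significant" alone, but an interaction p ≈ 0.25):

```python
# A simple-effect difference can clear p < .05 while the interaction contrast
# (difference-in-differences across conditions) is nowhere near significant.
# All inputs are hypothetical illustrations, not values from the paper.
import math
from statistics import NormalDist  # Python 3.8+

def z_test_diff(d1, se1, d2, se2):
    """Two-sided z-test of whether two mean differences differ (an interaction)."""
    z = (d1 - d2) / math.sqrt(se1**2 + se2**2)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

d_conceptual, se_c = 0.34, 0.17  # 'significant' on its own: z = 2.0, p ≈ .046
d_factual, se_f = 0.06, 0.17     # the other condition's gap

z, p = z_test_diff(d_conceptual, se_c, d_factual, se_f)
# p comes out large: the data can't distinguish the two conditions,
# so interpreting only the conceptual-condition gap is unjustified.
print(round(z, 2), round(p, 2))
```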
The fact that the authors knew to test for interactions but ignored what those tests implied was put on full display in study 3, where they tried a different intervention: people were asked to study or not study their notes before testing at a follow-up.
Visual results:
This section reads like someone took a shotgun to the paper and the buckshot was p-values in the dubious, marginal range: a main effect at p = 0.047, a study interaction at p = 0.021, and so on.
It's just a mess and there's no way this should be believed. Too hacked!
And yet, this got plenty of reporting.
So the idea is out there, it's widely reported on. Lots of people start saying you should take notes by hand, not with a laptop.
But the replications start rolling in and it turns out something is wrong.
In a replication of Mueller and Oppenheimer's first study with a sample that was about twice as large, Urry et al. failed to replicate the key performance-related results.
Verbatim note copying and longer notes with laptops? Both confirmed. The rest? No.
So then Urry et al. did a meta-analysis. This was very interesting, because they found that Mueller and Oppenheimer had used incorrect CIs, and that their results were actually nonsignificant for both types of performance.
Oh and the rest of the lit was too:
Meta-analytically, using a laptop definitely led to higher word counts in notes and more verbatim note-taking, but the performance results just weren't there.
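For readers unfamiliar with how results like these get pooled, here's a minimal fixed-effect (inverse-variance) meta-analysis sketch. The per-study effects and SEs are hypothetical, not Urry et al.'s data:

```python
# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE²,
# so precise studies count more. Inputs below are invented for illustration.
import math

def pool_fixed(effects, ses):
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical per-study laptop-minus-longhand effects on conceptual questions:
effects = [-0.30, 0.05, -0.10, 0.12]
ses = [0.17, 0.12, 0.20, 0.15]

d, se = pool_fixed(effects, ses)
lo, hi = d - 1.96 * se, d + 1.96 * se
# The pooled 95% CI straddles zero: no reliable performance difference.
print(round(d, 3), (round(lo, 3), round(hi, 3)))
```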
The closest the meta-analysis comes to performance going up is that conceptual performance may have risen a tiny bit (nonsignificant, to be clear), but who even knows if that assessment's fair.
That matters, since essays and open-ended questions are graded subjectively and thus prone to bias.
So, ditch the laptop to take notes by hand?
I wouldn't say to do that just yet.
But definitely ditch the journalists who don't tell you how dubious the studies they're reporting on actually are.
The reason this is hard to explain is that kids objectively have environments more similar to one another's than to their parents'.
In fact, for a cultural theory to recapitulate regression to the mean across generations, these things would need to differ!
Another fact that speaks against a cultural explanation is that the length of contact between fathers and sons doesn't matter for how correlated they are in status.
We can see this by leveraging the ages at which fathers die relative to the ages of their sons.
The internet gives everyone access to unlimited information, learning tools, and the new digital economy, so One Laptop Per Child should have major benefits.
The reality:
Another study just failed to find effects on academic performance.
This is one of those findings that's so much more damning than it at first appears.
The reason being, laptop access genuinely provides kids with more information than was available to any child in any previous generation in history.
If access were the issue, this resolves it.
And yet, nothing happens.
This implementation of the program was more limited than other ones we've already seen evaluated, though: the laptops weren't Windows-based and had no internet access, so no games, but also no unlimited information.
So, at least in this propensity score- or age-matched data, there's no reason to chalk the benefit up to the weight loss effects.
This is a hint though, not definitive. Another hint is that benefits were observed in short trials, meaning likely before significant weight loss.
We can be doubly certain about that last hint because diabetics tend to lose less weight than non-diabetics, and all of the observed benefit has so far been observed in diabetic cohorts, not non-diabetic ones (though those directionally show benefits).
The reason why should teach us something about commitment.
The government there previously attempted crackdowns twice in the form of mano dura ("hard hand") policies, but they failed because they didn't hit criminals hard enough.
Then Bukele really did.
In fact, previous attempts backfired compared to periods in which the government made truces with the gangs.
The government cracking down a little bit actually appeared to make gangs angrier!
You'd have been within your rights to conclude "tough on crime fails", but you'd be wrong.
You have to *actually* enforce the law or the policy won't work. Same story with three-strikes laws, or any other measure.
Incidentally, when did the gang problems begin for El Salvador? When the U.S. exported gang members to it.
Diets that restrict carbohydrate consumption lead to improved blood sugar and insulin levels, as well as reduced insulin resistance.
Additionally, they're good or neutral for the liver and kidneys, and they don't affect the metabolic rate.
Carbohydrate isn't the only thing that affects glycemic parameters.
So does fat!
So, for example, if you replace 5% of dietary calories from saturated fat with PUFA, that somewhat improves fasting glucose levels (shown), and directionally improves fasting insulin:
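For a sense of scale, here's a quick back-of-the-envelope for what a "5% of calories" swap means in food terms, assuming a 2,000 kcal/day diet and the standard 9 kcal per gram of fat:

```python
# What does swapping 5% of dietary calories from saturated fat to PUFA mean
# in grams? Assumes a 2,000 kcal/day diet; fat provides ~9 kcal per gram.
kcal_per_day = 2000
swap_fraction = 0.05
kcal_per_g_fat = 9

grams = kcal_per_day * swap_fraction / kcal_per_g_fat
print(round(grams, 1))  # about 11 g of saturated fat replaced with PUFA
```

So the swap in question is on the order of a tablespoon of butter traded for a tablespoon of a seed oil: small in food terms.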
Dietary composition may not be useful for improving the rate of weight loss ceteris paribus, but it can definitely make it easier given what else it changes.
Those details beyond metabolic rate may be why so many people find low-carb diets so easy!