Crémieux
Aug 24, 2024 · 19 tweets
What do the Washington Post, Brookings, The Atlantic, and Business Insider have in common?

They all employ credulous writers who don't read about the things they write about.

The issue? Attacks on laptop-based note-taking🧵


Each of these outlets (among many others, unfortunately) reported on a 2014 study by Mueller and Oppenheimer, which claimed that laptop-based note-taking was inferior to longhand note-taking for remembering content.
The evidence for this should not have been considered convincing.

In the first study, 67 students were randomized to take notes on TED talks either on a laptop or in longhand, and they were then assessed with factual and open-ended (conceptual) questions. The result? The laptop note-takers did worse on the open-ended questions.
The laptop-based note-takers didn't do worse when it came to factual content, but they did do worse when it came to the open-ended questions.

The degree to which they did worse should have been the first red flag: d = 0.34, p = 0.046.
The other red flag should have been that there was no significant interaction between note-taking medium and question type (factual vs. conceptual; p ≈ 0.25). Strangely, that went unnoted, but I will return to it.
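To make that concrete, here is a minimal sketch of a medium-by-question-type interaction test. It is not the authors' analysis: the data are simulated placeholders, and the design is treated as fully between-subjects for simplicity; only the structure of the test matters.

```python
# Sketch of a 2x2 interaction test: note-taking medium x question type.
# Simulated placeholder data, not Mueller & Oppenheimer's; the design is
# simplified to fully between-subjects for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 17  # roughly a 67-person sample split four ways

rows = []
for medium in ("laptop", "longhand"):
    for qtype in ("factual", "conceptual"):
        # Hypothetical cell means: a small conceptual-only deficit for laptops.
        mean = 0.0 if (medium == "laptop" and qtype == "conceptual") else 0.3
        for score in rng.normal(mean, 1.0, n_per_cell):
            rows.append({"medium": medium, "qtype": qtype, "score": score})
df = pd.DataFrame(rows)

model = ols("score ~ C(medium) * C(qtype)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# The row to look at is C(medium):C(qtype). If that interaction isn't
# significant, a "significant" laptop deficit on conceptual questions alone
# is weak evidence that the medium matters more for one question type.
```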
The authors sought to explain why there was no difference in factual knowledge about the TED talks while there was one in the ability to provide open-ended, more subjective answers about them.

Their explanation was simple: laptops encouraged verbatim, not creative, note-taking.
Before going on to study 2, do note that all of these bars lack 95% CIs. They show standard errors, so approximately double them in your head if you're trying to figure out which differences are significant (a quick numerical sketch of that conversion follows just below).

OK, so the second study added an intervention.
The intervention asked people using laptops to try not to take notes verbatim. This intervention totally failed, with a stunningly high p-value as a result.
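On the standard-error point above: converting an SE bar to an approximate 95% CI is just mean ± 1.96 × SE. A minimal sketch with hypothetical numbers (none of these are values from the paper):

```python
# Converting SE error bars into approximate 95% CIs: mean +/- 1.96 * SE.
# The means and SEs below are hypothetical, not values from the paper.
def approx_ci(mean, se, z=1.96):
    return mean - z * se, mean + z * se

mean_laptop, se_laptop = 0.05, 0.12       # hypothetical standardized scores
mean_longhand, se_longhand = 0.28, 0.11

print("laptop   ~95% CI:", approx_ci(mean_laptop, se_laptop))
print("longhand ~95% CI:", approx_ci(mean_longhand, se_longhand))
# Bars showing only +/- 1 SE make gaps look roughly twice as convincing as
# the corresponding confidence intervals would.
```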
In terms of performance, there was once again nothing to see for factual recall. But the authors decided to interpret a significant difference between the laptop-without-intervention participants and the longhand participants on the open-ended questions as meaningful.
But it wasn't, and the authors should have known it! Throughout the paper they repeatedly bring up interaction tests, and they knew that the interaction involving the intervention did nothing, so they shouldn't have interpreted that pairwise difference. They should have affirmed that there was no significant difference!
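The underlying statistical point: "significant in one comparison but not another" is not itself evidence that the two comparisons differ; the difference has to be tested directly. A minimal sketch with hypothetical effect sizes (not numbers from the paper):

```python
# Sketch: test whether two effects differ, instead of noting that one is
# "significant" and the other isn't. All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

d1, se1 = 0.40, 0.19   # e.g., a laptop-vs-longhand gap in one comparison
d2, se2 = 0.10, 0.20   # the same kind of gap in another comparison

z = (d1 - d2) / sqrt(se1**2 + se2**2)
p = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.3f}")
# d1 alone looks "significant" (0.40 / 0.19 ~ 2.1 SEs) while d2 doesn't,
# yet the difference between them is nowhere near significant (p ~ 0.28).
```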
That the authors knew how to test for interactions but leaned on them selectively was put on brilliant display in study 3, where they used a different intervention: people were either asked to study their notes before testing at a follow-up, or not.
This section is like someone took a shotgun to the paper and the buckshot was p-values in the dubious, marginal range: a main effect at p = 0.047, a study interaction at p = 0.021, and so on.

It's just a mess, and there's no way this should be believed. Too hacked!
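A rough way to see why a cluster of barely significant p-values is suspicious: with enough comparisons, a few p < .05 results are expected even when nothing is real. A toy calculation (the number of comparisons is an assumption, not a count of the tests in the paper):

```python
# Toy calculation: chance of at least one p < .05 across k independent
# comparisons when every null hypothesis is true. k is an assumption here,
# not a count of the tests in the paper.
alpha = 0.05
for k in (5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> P(at least one 'hit') ~ {p_any:.2f}")
# ~0.23, ~0.40, ~0.64. And when an effect is real and the study is well
# powered, p-values tend to land far below .05 rather than hovering just
# under it.
```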
And yet, this got plenty of reporting.

So the idea is out there and widely repeated. Lots of people start saying you should take notes by hand, not with a laptop.

But the replications start rolling in and it turns out something is wrong.
In a replication of Mueller and Oppenheimer's first study with a sample that was about twice as large, Urry et al. failed to replicate the key performance-related results.

Verbatim note copying and longer notes with laptops? Both confirmed. The rest? No.
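A sample twice the size matters because of power. Here is a hedged sketch of how power for a d = 0.34 effect (the size flagged above) scales with per-group n; the specific group sizes are assumptions for illustration:

```python
# Sketch: power to detect d = 0.34 in a two-group comparison at various
# per-group sample sizes (the ns are illustrative assumptions).
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for n_per_group in (33, 70, 130):
    power = power_calc.solve_power(effect_size=0.34, nobs1=n_per_group,
                                    alpha=0.05, ratio=1.0,
                                    alternative="two-sided")
    print(f"n = {n_per_group:3d} per group -> power ~ {power:.2f}")
# Roughly 0.28, 0.51, and 0.78. A ~67-person study split into two groups is
# badly underpowered for an effect this small, so a larger replication is
# far more informative either way.
```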
So then Urry et al. did a meta-analysis. This was very interesting, because apparently they found that Mueller and Oppenheimer had used incorrect CIs and that their results were actually nonsignificant for both types of performance.

Oh, and the rest of the literature was too.
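For reference, the standard large-sample 95% CI for Cohen's d is easy to compute from d and the group sizes. A hedged sketch, plugging in the d = 0.34 from the first study and assuming a roughly even split of the 67 participants (this is the textbook approximation, not Urry et al.'s exact recalculation):

```python
# Approximate 95% CI for Cohen's d using the standard large-sample variance
# formula. d = 0.34 comes from the thread; the even group split is an
# assumption, and this is not Urry et al.'s exact recalculation.
from math import sqrt

def ci_for_d(d, n1, n2, z=1.96):
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

lo, hi = ci_for_d(d=0.34, n1=33, n2=34)
print(f"d = 0.34, approx. 95% CI ~ [{lo:.2f}, {hi:.2f}]")
# Roughly [-0.14, 0.82]: with ~67 people split across two groups, an effect
# of this size comes with a very wide interval.
```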
Meta-analytically, using a laptop definitely led to higher word counts in notes and more verbatim note-taking, but the performance results just weren't there.
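For anyone curious how the pooling works mechanically, here is a minimal fixed-effect, inverse-variance sketch. The per-study effect sizes and standard errors are made up for illustration; they are not the estimates from Urry et al.'s meta-analysis:

```python
# Minimal fixed-effect inverse-variance pooling. The per-study effects and
# SEs are made-up illustrations, not the estimates from Urry et al.
from math import sqrt

studies = [          # (effect size d, standard error)
    (0.30, 0.25),
    (-0.05, 0.15),
    (0.10, 0.20),
    (0.02, 0.10),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
print(f"pooled d ~ {pooled:.2f}, 95% CI ~ +/- {1.96 * pooled_se:.2f}")
# A couple of noisy positive estimates get pulled toward zero once each
# study is weighted by its precision.
```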
The closest thing we get in the meta-analysis to performance going up is that maybe conceptual performance went up a tiny bit (nonsignificantly, to be clear), but who even knows if that assessment is fair.

That's important, since essays and open-ended questions are frequently biased measures.
So, ditch the laptop to take notes by hand?

I wouldn't say to do that just yet.

But definitely ditch the journalists who don't tell you how dubious the studies they're reporting on actually are.
Sources:

Postscript: A study with missing condition Ns, improperly charted SEs, and the result that laptop notes are worse only for laptop-based test-taking but not for tests taken by hand. Probably nothing: journals.sagepub.com/doi/10.1177/09…
journals.sagepub.com/doi/full/10.11…
journals.sagepub.com/doi/10.1177/00…

