A thread summarising my talk at #rED23 yesterday on the challenges of applying the science of learning in the classroom 🧵
As far back as the 1890s, William James cautioned against thinking you can apply the principles of psychology straight into the classroom. However, without an understanding of how the brain learns, planning instruction is suboptimal. I think these two positions encapsulate the interstitial point at which we find ourselves.
What might we mean by an applied science of learning? Here Frederick Reif provides a useful set of principles to consider. (I don’t think we’re anywhere near point 3)
What should an applied science of learning aim to do? It should not only aim to discover how learning happens but, more importantly, how to actually use those discoveries in the classroom. Donald Stokes's notion of Pasteur's quadrant is a useful way to think about this.
While there may be such a thing as a science of learning, we can't really say there's such a thing as a science of teaching. (Although Mayer would argue there is such a thing as a science of instruction.)
Some of the foundational beliefs about how learning happens are not supported by cognitive science and have paved the way for bad ideas in the classroom.
Here are some examples of those bad ideas applied in the classroom courtesy of the brilliant @stoneman_claire’s diabolical time capsule of pedagogical novichok x.com/stoneman_clair…
These activities are iatrogenic in effect. In other words, the cure is worse than the disease.
What are some examples of overarching principles of how learning happens? Here I offer some to consider when designing classroom instruction based on cognitive science:
A big challenge is creating a shared understanding of how learning happens. For whatever reason, models of learning based on cognitive science don’t appear to have been a part of many teacher training courses in the past.
Many pseudoscientific beliefs about learning have persisted in the profession. Various studies have shown that as many as 9 out of 10 teachers believe kids learn effectively when content is matched to their learning styles.
A vital challenge now is to create a shared understanding of how learning happens.
As the Perry review (2021) showed, despite a very strong body of evidence from lab settings, a lot of the evidence on cognitive science in practice is not from ecologically valid (realistic) settings.
Thinking more closely about a specific example of applying evidence: Instead of mandating retrieval practice every lesson, subject leaders should be considering implementation in a domain/stage specific way. Among the questions we can ask are:
Applying the science of learning needs careful consideration lest it become a lethal mutation. It shouldn’t be a new form of prescription, robbing teachers of professional agency.
An analogy: it's not so much painting by numbers as pointillism, where instead of a simplistic broad-brush approach, teachers make much more refined decisions moment to moment based on a sound knowledge of how learning happens.
Frederick Reif has been asking this question for over 50 years. There is now an ethical imperative for every teacher to have a sound knowledge of how learning happens.
Vygotsky's 'Zone of Proximal Development' is perhaps the most misunderstood idea in education. It was never a teaching method but a metaphor for how teaching can pull thinking upward, from the everyday to the scientific. ⬇️ 🧵
There are, broadly speaking, two Vygotskys.
The Anglo-American Vygotsky is social, collaborative, constructivist. Born in Mind in Society (1978), he became the patron saint of progressive education and appears alongside Bruner, Piaget, Rogoff, and Wertsch in teacher education courses.
His classroom privileges dialogue, peer tutoring, and scaffolding. He advocates discovery learning, group work, and authentic tasks. The teacher steps back.
Really interesting new paper on using 'contrasting erroneous examples' as a means of preventing common misconceptions.
The worked example effect shows that novices benefit from step-by-step clarity, while this paper suggests that once some foundations are in place, contrasting erroneous examples can push learning further by clarifying boundaries.
Again I'm reminded of Theory of Instruction here and the idea that we learn what something is by contrasting it with what it isn't.
The crucial thing here seems to be prompts and the specific operators they use (explain, reflect, describe) which determine whether students engage in generative learning or mere recognition.
Not all wrong answers are equal. I used to think students just needed the right information to fix misconceptions but then I read the work of Michelene Chi🧵⬇️
Chi’s research revealed that misconceptions are not just small knowledge deficits; they are often coherent yet incorrect frameworks of understanding.
Put simply, a student’s wrong answer can stem from a well-formed but fundamentally flawed theory about how something works, rather than from a simple factual mistake.
So a student’s wrong answer might be the right answer according to their internal model. That’s the problem.
What is the effect of giving children smartphones before the age of 13? It's bad. Strongly associated with poorer mental health and wellbeing. BUT the evidence is largely correlational. What does this mean? 🧵⬇️
A new global study of over 100,000 young adults found that receiving a smartphone before age 13 is associated with significantly poorer mental health outcomes in early adulthood, particularly increased suicidal thoughts and diminished emotional regulation, with effects primarily mediated through early social media access.
The research demonstrates a clear dose-response relationship: the younger children are when they receive smartphones, the worse their mental health outcomes as young adults. Females who received smartphones at ages 5-6 showed rates of suicidal ideation 20 percentage points higher than those who received them at 13.
For whatever reason, the idea of knowing stuff has become unfashionable. We’ve absorbed the idea that facts are “mere” details, that skills and dispositions matter more, and that technology makes memory unnecessary.
But knowledge isn't obsolete; it's the precondition for reasoning, creativity, and insight. Skills divorced from knowledge are empty performances.
Expertise isn't about having more working memory; it's about needing less of it. Experts automate many components in long-term memory and can recognise meaningful patterns instantly, bypassing the need to process individual elements. ⬇️ 🧵
For example, the multiplication tables aren't memorised for their own sake, but because automated arithmetic facts free working memory for algebraic reasoning.
Phonics isn't taught to create little robots, but because automated letter-sound correspondences liberate the cognitive resources necessary for comprehension and analysis.