A thread summarising my talk at #rED23 yesterday on the challenges of applying the science of learning in the classroom 🧵
As far back as the 1890s, William James cautioned against assuming you can apply the principles of psychology directly to the classroom. However, without an understanding of how the brain learns, planning instruction is suboptimal. I think these two positions capture the in-between space in which we find ourselves.
What might we mean by an applied science of learning? Here Frederick Reif provides a useful set of principles to consider. (I don’t think we’re anywhere near point 3)
What should an applied science of learning aim to do? It should not only aim to discover how learning happens but, more importantly, how to actually use that knowledge in the classroom. Donald Stokes' notion of Pasteur's quadrant is a useful way to think about this.
While there may be such a thing as a science of learning, we can't really say there's such a thing as a science of teaching. (Although Mayer would argue there is such a thing as a science of instruction.)
Some of the foundational beliefs about how learning happens are not supported by cognitive science and have paved the way for bad ideas in the classroom.
Here are some examples of those bad ideas applied in the classroom courtesy of the brilliant @stoneman_claire’s diabolical time capsule of pedagogical novichok x.com/stoneman_clair…
These activities are iatrogenic in effect. In other words, the cure is worse than the disease.
What are some examples of overarching principles of how learning happens? Here I offer some to consider when designing classroom instruction based on cognitive science:
A big challenge is creating a shared understanding of how learning happens. For whatever reason, models of learning based on cognitive science don’t appear to have been a part of many teacher training courses in the past.
Many pseudoscientific beliefs about learning have persisted in the profession. Various studies have shown that as many as 9 out of 10 teachers believe kids learn more effectively when content is matched to their preferred learning style.
All of which makes creating a shared, evidence-informed understanding of how learning happens a vital challenge for the profession.
As the Perry review (2021) showed, despite a very strong body of evidence from lab settings, a lot of the evidence on cognitive science in practice is not from ecologically valid (realistic) settings.
Thinking more closely about a specific example of applying evidence: Instead of mandating retrieval practice every lesson, subject leaders should be considering implementation in a domain/stage specific way. Among the questions we can ask are:
Applying the science of learning needs careful consideration lest it become a lethal mutation. It shouldn’t be a new form of prescription, robbing teachers of professional agency.
An analogy: it's not so much painting by numbers as pointillism, where instead of a simplistic broad-brush approach, teachers make much more refined decisions moment to moment based on a sound knowledge of how learning happens.
Frederick Reif has been asking this question for over 50 years. There is now an ethical imperative for every teacher to have a sound knowledge of how learning happens.
What's the "sweet spot" for spacing out practice? for students scoring below 35% they likely need more instruction or support first, while students scoring above 75% probably won't gain much from spacing out their practice. onlinelibrary.wiley.com/doi/pdf/10.100…
Specific evidence for this claim: "In Barzagar Nazari and Ebersbach's (2019a) study, the advantage of distributed practice occurred only for students scoring 3–7 out of 9.5 points, that is, 32%–74% on the first practice set. In Ebersbach and Barzagar Nazari's (2020a, Exp. 2) study, the advantage of distributed practice on transfer performance occurred only for students scoring >3.5 out of 9 points, that is, >39% on the first practice set." (p.12)
The most interesting thing about this to me is that spaced practice probably won't have much impact on students who have scored 75% or more, since they've already mastered the material, which really underlines the importance of assessment for learning. In DI this is called 'placement' or mastery testing. Basically you need to know where students are at to make effective decisions about instructional strategies:
"mathematics textbook authors, teachers, and students are encouraged to adopt this practice strategy also with complex materials taking initial practice performance into account." (p.12)
Students learn faster when they see what something is and what it isn’t. One of the most important aspects of curriculum planning + instructional design is effectively using examples and non-examples. 🧵⬇️
Been re-reading Theory of Instruction through the lens of what we've learned about learning over the last 50 years and realising just how brilliantly it anticipates so much of how learning happens. It's not an easy read but for me the core concept in it is "faultless communication": the idea that teaching should be designed so precisely that misunderstanding is impossible.
One of the mad things about this aspect of DI is that it's almost a unified theory of learning, in the sense that so many theorists from a range of different traditions have advocated for it in one form or another. Socrates, Aristotle, Vygotsky, Bruner, Skinner, Ausubel and Sweller all basically say the same thing here: learning is driven by clear distinctions, and knowing what something is requires knowing what it isn't. It's really about concept refinement, but done in an incredibly detailed way.
How do we learn? Just read this new pre-print from the great Slava Kalyuga which provides a slightly updated model of how we acquire knowledge through the conduit of an "explicit intention to learn". 🧵⬇️
As you'd expect from Kalyuga, he argues that instructional strategies should incorporate CLT (cognitive load theory) to support biologically secondary learning (the stuff kids learn in school), emphasising worked examples, guided discovery, gradual reduction of scaffolds etc. What's interesting is the idea that the "explicit intention to learn" is the driver behind humanity's cultural evolution and that intelligence is an emergent property of this.
He outlines 6 key principles of human cognitive architecture which are oldies and goldies (except for the last one):
1. Information Store Principle: Emphasizes long-term memory as a repository of structured knowledge patterns, distinguishing between biologically primary (intuitive) and secondary (explicit, learned) information.
Primary information is innate or rapidly acquired (e.g., language), while secondary requires effortful, intentional learning (e.g., reading, scientific knowledge).
2. Borrowing and Reorganizing Principle: Highlights that knowledge is largely acquired from external sources, such as social interactions or written materials, and reorganized to fit existing cognitive structures.
This principle underpins knowledge transfer and cultural accumulation.
3. Randomness as Genesis Principle: Explains the role of random trial-and-error and problem-solving heuristics in acquiring new information. In humans, this interacts with intentional learning to refine random processes into structured problem-solving strategies.
4. Narrow Limits of Change Principle: Suggests that working memory constraints limit the extent of modifications to knowledge, protecting existing structures from disruption. This constraint ensures stability while allowing incremental learning.
5. Environmental Organizing and Linking Principle*: Discusses how familiar environments relax cognitive constraints, enabling more efficient information processing through chunking and automation.
*This is a lesser-known aspect of CLT but an important one because it highlights how context and prior knowledge can drastically influence cognitive efficiency and learning outcomes. (it also explains why Liverpool are better at Anfield than away)
6. Explicit Intention to Learn Principle: This is new and represents the conduit through which humans, as intelligent systems, are capable of conscious, effortful learning. This intention drives the shift from implicit to explicit processes, enabling cultural and technological evolution.
New paper challenging Cognitive Load Theory. I've been hoping to read a good criticism of CLT for some time but unfortunately this is not it. THREAD ⬇️🧵 tandfonline.com/doi/full/10.10…
The paper basically argues that CLT is an outdated framework, rooted in 1980s cognitive psychology, and needs to be replaced by a richer, more holistic view of the brain and learning. Fair enough, let's see what they have to say... (Although I don't accept the argument that just because something is old it is 'outdated'. Indeed, the authors themselves offer Darwin's theory of evolution as an analogy for challenging existing orthodoxies.)
The authors ultimately advocate for a "new" approach to understanding learning, grounded in modern neuroscience and philosophy (ok... this sounds interesting). The main claims are that:
1. Learning is emergent, self-organizing, and not strictly linear.
2. The brain actively predicts and processes information, rather than reacting passively.
3. Emotional salience and attention play a key role in memory formation and learning.
So I think the third point is sort of fair and worth exploring, but the first two don't actually contradict anything in Sweller, or at least not in any version of CLT that I recognise.
Difficulties are not always 'desirable'. New review gives new insights into how to apply this idea with retrieval practice and how to avoid lethal mutations. 🧵⬇️
Essentially this paper advocates for a subtle but important distinction: instead of designing tasks based on the content or a static judgement of the learner, we should design tasks of dynamic difficulty based on the learner's relative expertise and the complexity of the material.
Retrieval practice is not neutral; there's a broad spectrum. For example, there's a big difference between retrieving something and merely recognising it, and the difference that matters seems to lie in the learning itself, not in the assessment of that learning. So, for example...
▶️Cued Recall: You are given a hint or prompt to help you remember something. Eg: "What’s the capital of France? Hint: It starts with 'P'."
▶️Free Recall: You have to remember on your own without hints. Eg: "Name all the capitals you know."
Both cued and free recall tasks require more effort than recognition tasks (like multiple-choice questions where you just pick the correct answer) but it's this extra effort during learning which strengthens memory, even if the final test is easier (like a recognition test).
What this means is that the hard work of retrieving information during learning (using cues or no cues) makes it stick better in your memory, no matter how the final test is formatted. SO.... retrieval effort is what counts most but the kicker is that it needs to be a particular kind of effort.
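To illustrate the "dynamic difficulty" idea from earlier in this thread, here's a minimal sketch, assuming we can summarise a learner's relative expertise with the material as a single score. The task categories come from the spectrum above (recognition, cued recall, free recall), but the cut-offs and function name are hypothetical illustrations, not values from the review.

```python
# A minimal sketch of choosing retrieval demand from relative expertise.
# Cut-offs are hypothetical; the point is that the format adapts to the
# learner rather than being fixed by the content alone.

def choose_retrieval_task(relative_expertise: float) -> str:
    """Map relative expertise (0.0 = novice, 1.0 = expert) to a retrieval format."""
    if relative_expertise < 0.4:
        # Novices: lower retrieval demand, e.g. recognition tasks.
        return "recognition (e.g. multiple choice)"
    if relative_expertise < 0.7:
        # Developing learners: cued recall raises effort without overload.
        return "cued recall (prompted questions)"
    # Relative experts: free recall maximises desirable retrieval effort.
    return "free recall (no prompts)"

if __name__ == "__main__":
    for level in (0.2, 0.5, 0.9):
        print(f"expertise {level}: {choose_retrieval_task(level)}")
```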
Why does the brain matter for education? New edition of BJEP has four papers which are very interesting. Made some notes, here's a quick 🧵⬇️
“The particular way that the human cognitive system works and the way that humans learn is due to the way their brains work. The way their brains work is due to biology. And our biology works the way it does because of evolution." Ok fair enough, nice initial rebuttal to the 'brain-as-computer' fallacy...
"The brain matters because teachers need to know how human cognitive systems work because of the foibles of biology. Indeed, if one views the mind as a form of information processing device, from the perspective of computer science, there are properties of learning in humans that seem strange until biology is considered” - ⬅️ I hear this a lot and it's an important point. The brain does not work like a computer despite the fact that cog sci uses similar terminology but we are talking about models here and 'processing' and 'storage' are appropriate words to use for what actually happens.