A thread summarising my talk at #rED23 yesterday on the challenges of applying the science of learning in the classroom 🧵
As far back as the 1890s, William James cautioned against thinking you can apply the principles of psychology straight into the classroom. However, without an understanding of how the brain learns, planning instruction is suboptimal. I think these two positions encapsulate the in-between point at which we currently find ourselves.
What might we mean by an applied science of learning? Here Frederick Reif provides a useful set of principles to consider. (I don’t think we’re anywhere near point 3)
What should an applied science of learning aim to do? It should not only aim to discover how learning happens but, more importantly, how to actually use those findings in the classroom. Donald Stokes's notion of Pasteur's quadrant is a useful way to think about this.
While there may be such a thing as a science of learning, we can't really say there's such a thing as a science of teaching. (Although Mayer would argue there is such a thing as a science of instruction.)
Some of the foundational beliefs about how learning happens are not supported by cognitive science and have paved the way for bad ideas in the classroom.
Here are some examples of those bad ideas applied in the classroom courtesy of the brilliant @stoneman_claire’s diabolical time capsule of pedagogical novichok x.com/stoneman_clair…
These activities are iatrogenic in effect. In other words, the cure is worse than the disease.
What are some examples of overarching principles of how learning happens? Here I offer some to consider when designing classroom instruction based on cognitive science:
A big challenge is creating a shared understanding of how learning happens. For whatever reason, models of learning based on cognitive science don’t appear to have been a part of many teacher training courses in the past.
Many pseudoscientific beliefs about learning have persisted in the profession. Various studies have shown that as many as 9 out of 10 teachers believe kids learn effectively when content is matched to their learning styles.
A vital challenge now is to create a shared understanding of how learning happens.
As the Perry review (2021) showed, despite a very strong body of evidence from lab settings, a lot of the evidence on cognitive science in practice is not from ecologically valid (realistic) settings.
Thinking more closely about a specific example of applying evidence: Instead of mandating retrieval practice every lesson, subject leaders should be considering implementation in a domain/stage specific way. Among the questions we can ask are:
Applying the science of learning needs careful consideration lest it become a lethal mutation. It shouldn’t be a new form of prescription, robbing teachers of professional agency.
An analogy: it's not so much painting by numbers as pointillism, where instead of a simplistic broad-brush approach, teachers make much more refined decisions moment to moment based on a sound knowledge of how learning happens.
Frederick Reif has been asking this question for over 50 years. There is now an ethical imperative for every teacher to have a sound knowledge of how learning happens.
New study: A single 10-minute retrieval practice activity significantly improved final exam performance compared to a review session. But there's a lot more to this study 🧵⬇️
The intervention was 10 minutes of students taking an unexpected, closed-notes practice test consisting of:
- 10 multiple-choice questions created by the instructor
- Questions focused on key concepts likely to appear on the final exam
- Each question had four answer choices
- Questions assessed recall or comprehension of foundational concepts
Students were told it was ungraded and framed as preparation for the final exam. Immediately after the 10-minute test, the instructor provided corrective feedback, explaining why each answer was correct or incorrect.
The passive review was a brief PowerPoint-based presentation where the instructor delivered key concepts as bullet points to the class. Specifically, the review group received:
- The same content that was tested in the retrieval practice group
- Information presented in bullet-point format on slides
- Instructor clarification of misconceptions
- A structured overview of concepts likely to appear on the final exam
This is what the study calls a "more common instructional approach"; essentially a traditional pre-exam review session where students passively receive information rather than actively retrieving it from memory.
This new paper is a great example of desirable difficulties in practice: Interleaving spelling tasks led to better performance on later spelling tests, even though it was harder during practice. 🧵⬇️
What is interleaving and how does it work? Essentially it's really about a kind of discrimination: when learners encounter different items back-to-back, they must pay attention to what distinguishes one from the next. This strengthens their ability to categorise and apply the right rule or strategy.
Interleaving stands in opposition to "blocked practice", in which learners focus on one type of problem, skill, or concept at a time, repeating it over and over before moving on to the next.
The key thing to understand about interleaving is that it leads to poorer performance in the short term, BUT better learning in the long term.
While blocked practice can feel easier and lead to better short-term performance, it often results in poorer long-term retention and weaker transfer because it doesn’t require learners to distinguish between different types of problems or rules.
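To make the contrast concrete, here's a minimal sketch of how the same items can be sequenced as blocked versus interleaved practice. (The spelling categories and word lists below are hypothetical illustrations, not materials from the paper.)

```python
from itertools import chain, zip_longest

def blocked(groups):
    """Blocked schedule: exhaust one category before the next (AAABBBCCC)."""
    return list(chain.from_iterable(groups))

def interleaved(groups):
    """Interleaved schedule: rotate across categories (ABCABC...),
    so adjacent items belong to different categories and the learner
    must discriminate which rule applies."""
    return [x for x in chain.from_iterable(zip_longest(*groups)) if x is not None]

# Hypothetical spelling-rule categories, purely for illustration
tion = ["station", "nation", "motion"]
sion = ["vision", "fusion", "tension"]
cian = ["magician", "optician", "musician"]

print(blocked([tion, sion, cian]))      # station, nation, motion, vision, ...
print(interleaved([tion, sion, cian]))  # station, vision, magician, nation, ...
```

The round-robin ordering is deliberately the "harder" one: each item forces the learner to re-decide which spelling rule is in play, which is the discrimination mechanism described above.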
Once again, matching teaching to learning styles has near-zero impact on student achievement. I've noticed a resurgence of the learning styles myth recently so this new study is timely. 🧵 ⬇️
9 out of 10 teachers still believe in the myth despite it having been thoroughly debunked by cognitive science. We've known this for 10 years. This to me is the most sobering aspect of all this and again shows the pressing need for teachers to get proper training on how learning happens.
Even worse, the learning styles myth is still a part of teacher training in some quarters.
Why The Forgetting Curve Is Not As Useful As You Think. Ebbinghaus' research was groundbreaking for its time, but it's not really how learning happens in authentic learning situations ⬇️🧵
I see a lot of training where school leaders use Ebbinghaus as a vehicle to talk about retrieval practice. While the basic premise is important, I don't think it's particularly useful for teachers because it's not really how learning happens in authentic learning situations.
The forgetting curve shows that memory loss follows an exponential pattern—we forget rapidly at first, then more slowly over time. This reinforced the idea that spaced repetition can help prevent forgetting.
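The basic shape of the curve can be sketched as a simple exponential decay. (The "stability" constant below is an illustrative parameter, not an empirically fitted value; real forgetting varies with material, learner, and context, which is part of the thread's point.)

```python
import math

def retention(hours_since_learning, stability=24.0):
    """Ebbinghaus-style forgetting curve: R = e^(-t/s).
    Retention falls steeply at first, then flattens. A larger
    'stability' (e.g. after successful retrieval practice)
    slows the rate of forgetting."""
    return math.exp(-hours_since_learning / stability)

for t in (1, 24, 72, 168):
    print(f"{t:>3}h: {retention(t):.0%} vs {retention(t, stability=96.0):.0%} with review")
```

The second column shows why spaced repetition is usually motivated by this model: each successful review is assumed to increase stability and flatten the curve.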
New paper asks why the same major motivation theories (self-determination theory, expectancy-value theory, achievement goal theory, etc.) have dominated educational psychology for decades with little change ⬇️ 🧵
Dominant motivation theories are valuable but underspecified. The paper acknowledges that current theories have "provided tremendous advancements in the understanding of motivation" and led to successful interventions, but argues they don't adequately explain how motivation actually works at a mechanistic level.
Motivation theories share a common formula. Most follow a similar structure in which "adaptive forms of motivation (e.g., need satisfaction, mastery goals, self-efficacy) predict positive outcomes," while "maladaptive forms predict negative outcomes." This makes them somewhat obvious and difficult to distinguish from each other.
What's the "sweet spot" for spacing out practice? Students scoring below 35% likely need more instruction or support first, while students scoring above 75% probably won't gain much from spacing out their practice. onlinelibrary.wiley.com/doi/pdf/10.100…
Specific evidence for this claim: "In Barzagar Nazari and Ebersbach's (2019a) study, the advantage of distributed practice occurred only for students scoring 3–7 out of 9.5 points, that is, 32%–74% on the first practice set. In Ebersbach and Barzagar Nazari's (2020a, Exp. 2) study, the advantage of distributed practice on transfer performance occurred only for students scoring >3.5 out of 9 points, that is, >39% on the first practice set." (p.12)
The most interesting thing about this to me is that spaced practice probably won't have much impact on students who have scored 75% or more, since they've already mastered the material. This really underlines the importance of assessment for learning. In DI this is called 'placement' or mastery testing: you need to know where students are to make effective decisions about instructional strategies:
"mathematics textbook authors, teachers, and students are encouraged to adopt this practice strategy also with complex materials taking initial practice performance into account." (p.12)
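As a rough sketch, the reported thresholds can be read as a simple triage rule. (The cut-offs and wording below are my paraphrase of the paper's ~35%/75% bands, not a published algorithm.)

```python
def spacing_recommendation(first_practice_score_pct):
    """Triage on first-practice accuracy, per the approximate bands
    reported above: too low -> reteach before spacing; too high ->
    spacing adds little; in between -> distributed practice pays off."""
    if first_practice_score_pct < 35:
        return "reteach or support first"
    if first_practice_score_pct > 75:
        return "likely mastered; spacing adds little"
    return "distribute practice over time"

for score in (20, 55, 90):
    print(f"{score}% -> {spacing_recommendation(score)}")
```

The point of the sketch is the one the thread makes: the instructional decision depends on a placement measure taken *before* choosing the practice schedule.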