Carl Hendrick
Jan 11
Direct or explicit instruction seems to be widely misunderstood. It's often characterised as boring lectures with little interaction, failing to cater to the needs of all students. Nothing could be further from the truth. A short thread 🧵⬇️
Direct Instruction (DI) as a formal method was designed by Siegfried Engelmann and Wesley Becker in the 1960s for teaching core academic skills. It is a structured, systematic approach that emphasises carefully sequenced materials delivered in clear, unambiguous language with examples.

It's designed to leave little room for misinterpretation and to ensure that all students, regardless of background or ability, can learn effectively.

It's also anything but boring. Here is a video from the 1960s of Engelmann teaching maths. Notice how interactive and fast-paced the teaching is:
In the 1970s, Barak Rosenshine researched what makes for high-quality teaching. He found that really effective teachers use direct instruction (DI) as a core part of their practice, and that it's about a lot more than merely explaining things ⬇️
In the 1980s, Brophy and Good looked at the relationship between teacher behaviours and student achievement. They found that explicit instruction was an integral part of effective teaching and was, in fact, a form of active teaching. They write that although there is a lot of teacher talk, most of it is "academic rather than procedural or managerial and much of it involves asking questions and giving feedback rather than extended lecturing." edwp.educ.msu.edu/research/wp-co…
In the early 2000s, Explicit Direct Instruction (EDI) was developed by Silvia Ybarra and John Hollingsworth and, despite the harsh-sounding name, is very interactive.

Something which will probably shock most teachers is that Explicit Direct Instruction suggests that teachers talk for a maximum of two minutes before engaging students in some way ⬇️
One major misconception is the claim that "Direct or Explicit instruction marginalises SEN pupils." This is completely untrue; in fact, the opposite is probably more accurate. The EEF recommended explicit instruction as a core part of their ‘Special Educational Needs in Mainstream Schools’ guidance report.
What is the evidence base for direct or explicit instruction?
Well, there's a lot, but let's take the unfortunately named Project Follow Through (initiated in 1968 and extended right through to 1977), which was the largest and most comprehensive educational experiment ever conducted in the US. Its primary goal was to determine the most effective ways of teaching at-risk children in kindergarten through third grade.

The results indicated that Direct Instruction was the most effective across a range of measures, including basic skills, cognitive skills, and affective outcomes.
Two things I find astounding about Project Follow Through:

1. Not only did these students (mostly disadvantaged and at-risk) do better on what was termed 'basic skills', such as reading and maths, but they also felt better about themselves.
2. Many educationalists and academics not only ignored these results but actually encouraged schools to use the least effective methods from this study. As Cathy Watkins puts it: "The majority of schools today use methods that are not unlike the Follow Through models that were least effective (and in some cases were most detrimental)."
nifdi.org/research/esp-a…

More from @C_Hendrick

Dec 20
New paper challenging Cognitive Load Theory. I've been hoping to read a good criticism of CLT for some time, but unfortunately this is not it. THREAD ⬇️🧵 tandfonline.com/doi/full/10.10…
The paper basically argues that CLT is an outdated framework, rooted in 1980s cognitive psychology, and needs to be replaced by a richer, more holistic view of the brain and learning. Fair enough, let's see what they have to say... (Although I don't think it follows that just because something is old, it is 'outdated'. Indeed, the authors offer Darwin's theory of evolution as analogous to challenges to existing orthodoxies.)
The authors ultimately advocate for a "new" approach to understanding learning, grounded in modern neuroscience and philosophy (OK... this sounds interesting). The main claims are that:
1. Learning is emergent, self-organizing, and not strictly linear.
2. The brain actively predicts and processes information, rather than reacting passively.
3. Emotional salience and attention play a key role in memory formation and learning.

So I think the third point is sort of fair and worth exploring, but the first two aren't actually contradicted by Sweller, or at least not in any way that I recognise.
Dec 15
Difficulties are not always 'desirable'. A new review offers fresh insights into how to apply this idea with retrieval practice and how to avoid lethal mutations. 🧵⬇️
Essentially this paper advocates for a subtle but important distinction: instead of designing tasks based on the content or a static judgement of the learner, we should design tasks of dynamic difficulty based on the learner's relative expertise and the complexity of the material.
Retrieval practice is not neutral; there's a broad spectrum. For example, there's a big difference between retrieving something and merely recognising something, but the difference seems to lie in the learning, not the assessment of that learning. So, for example...
▶️Cued recall: you are given a hint or prompt to help you remember something. E.g. "What’s the capital of France? Hint: It starts with 'P'."
▶️Free recall: you have to remember on your own, without hints. E.g. "Name all the capitals you know."
Both cued and free recall tasks require more effort than recognition tasks (like multiple-choice questions where you just pick the correct answer), but it's this extra effort during learning which strengthens memory, even if the final test is easier (like a recognition test).

What this means is that the hard work of retrieving information during learning (with or without cues) makes it stick better in your memory, no matter how the final test is formatted. So... retrieval effort is what counts most, but the kicker is that it needs to be a particular kind of effort.
Dec 7
Why does the brain matter for education? The new edition of BJEP has four very interesting papers. I made some notes; here's a quick 🧵⬇️
“The particular way that the human cognitive system works and the way that humans learn is due to the way their brains work. The way their brains work is due to biology. And our biology works the way it does because of evolution." Ok fair enough, nice initial rebuttal to the 'brain-as-computer' fallacy...
"The brain matters because teachers need to know how human cognitive systems work because of the foibles of biology. Indeed, if one views the mind as a form of information processing device, from the perspective of computer science, there are properties of learning in humans that seem strange until biology is considered” - ⬅️ I hear this a lot and it's an important point. The brain does not work like a computer despite the fact that cog sci uses similar terminology but we are talking about models here and 'processing' and 'storage' are appropriate words to use for what actually happens.
Nov 25
Really interesting new study on worked examples which underlines just how easy it is to get things wrong and end up in lethal mutation territory 🧵
When used effectively, worked examples are a very efficient method for scaffolding learning in the early stages of skill acquisition. They're typically used in teaching problem-solving skills in fields like mathematics, science, and programming.
One of the most critical yet overlooked aspects of instructional design is the selection of worked examples. This study shows that ambiguity in examples can significantly impair learning, fostering misconceptions and false confidence in students.
Nov 16
The new Ofsted proposals being discussed at the moment are concerning, particularly the separation of curriculum and instruction. Some thoughts 🧵
The particular problems that any inspectorate faces in shifting away from single-word judgements are pretty well established in the literature going back 50 years, but it seems little has been learned. It's clear that there needs to be a system of accountability, but we are in danger of going back 15-20 years, to when classroom observations were essentially tarot-card reading, with key judgements made on the flimsiest of evidence. Here are some key issues to consider:

Context Blindness:
Firstly, classroom observation scores are strongly influenced by the types of students teachers work with; those teaching higher-achieving students often receive better ratings. This is especially true for subject specialists compared to generalists. Teachers are often assigned to classes in ways that favour this pattern, making observation scores less fair. Scores tend to reflect how well a teacher manages the classroom or creates a positive environment, but they are less affected by the teaching strategies which lead to long-term gains. These observation scores also vary a lot from year to year because they depend so much on class dynamics.

Rater expertise:
Evaluators often lack the necessary subject-matter knowledge to make informed judgements about discipline-specific instructional practices. Even within a discipline there is often a lack of knowledge about what is being taught. An ex-Maths teacher doesn't always have the content knowledge to truly evaluate every Maths lesson. This gap in expertise can lead to superficial evaluations that miss the often unseen and covert aspects of effective instruction.

Generic observation instruments:
Generic observation frameworks, which are designed to be broadly applicable across subjects, actually fail to capture the unique pedagogical practices required for different disciplines. Also, decoupling curriculum from instruction is a major step backward. A very good example of this from Christine Counsell is senior leaders using verbs like "describe," "explain," and "evaluate" as some kind of indicator of effective learning processes, which causes a disconnect with, for example, the disciplinary focus on causal explanation in history. This disconnect reflects a broader clash in education between subject-specific curricula and generic aims focused on perceived utility.

Perverse incentives:
Lessons that were considered 'outstanding' 10-15 years ago were often all-singing, all-dancing Cirque du Soleil-style lessons with students running around the room and writing on posters on the wall. These lessons were rolled out for inspections, never to be seen again. We now know that engagement doesn't always mean learning and that being cognitively active doesn't have to mean being physically active. The idea that learning is an observable phenomenon has some support, such as precision teaching and the work of Ogden Lindsley, but this is a very specific methodology which is rare in most classrooms. In my experience, most of the time when observers (inspectors/leadership) have cited evidence for learning, they have cited performance, not learning. journals.sagepub.com/doi/full/10.31…
So what is a better way of thinking seriously about lesson observations and the judgement of instructional quality?
In our book 'How Teaching Happens', @P_A_Kirschner, @DrJimHeal and I wanted to consider this problem of teacher effectiveness, so we focussed on David Berliner's body of work and his seminal paper 'Learning About and Learning From Expert Teachers'. In terms of judging teacher quality, let's start with what we mean by teacher expertise. If we want to identify good teaching then we need to be explicit about what that actually means. Here, Glaser is useful:

▶ Expertise is domain specific, takes countless hours to achieve, and continues to develop throughout one’s career.

▶ Expertise development is not linear; it makes jumps and stagnates on plateaus.

▶ Experts’ knowledge is better structured than novices’.

▶ Experts’ representations of problems are deeper and richer than novices’.

▶ Experts recognise meaningful patterns faster than novices do.

▶ Experts are more flexible, more opportunistic planners, and can change representations faster than novices.

▶ While experts may start solving a problem slower than a novice, they’re – in the long run – faster problem solvers.

▶ Experts have automatised many behaviours, allowing easier and quicker processing of more complex information. researchgate.net/publication/22…
Nov 3
One thing we hear over and over again is that we should be teaching creativity in schools. We also often assume that certain people are just more creative than others. What is the evidence for this? A new paper examined fifty years of research. Thread ⬇️ 🧵

This paper asks a basic question: are some people simply born more creative, regardless of the subject or field, and if so, what does this mean for education? This idea, known as the "domain-general hypothesis", suggests that a person with a high level of general creativity could excel in any creative task they pursue.
The article questions whether we should be testing for this general creative ability, whether we should teach it, and whether "general creativity" is even a thing.
To test this, researchers studied how participants completed creative tasks across various domains, such as writing, visual arts, and problem-solving.
The general creative type, or "domain-general hypothesis", would be supported if participants consistently demonstrated above-average creativity across all these domains. However, this review found several limitations:
Firstly, studies rarely measured participants’ existing knowledge in the tested domains, meaning their success could be attributed to pre-existing specific knowledge rather than a general creative ability.
For example, someone who performs well on a creative writing task might already have a strong vocabulary and knowledge of writing techniques; their success might reflect this existing knowledge rather than a general creative ability.