Disappointed with this article. It just repeats what’s already known but with some added ambiguous wording -> Cognitive-Load Theory: Methods to Manage Working Memory Load in the Learning of Complex Tasks journals.sagepub.com/doi/full/10.11…
First the abstract. ‘Productive’ and ‘unproductive’ cognitive load seem to be the new favoured terms. Reminds me of ‘deliberate’ in ‘deliberate practice’. It’s tautological. Who would want ‘unproductive load’? When is it ‘unproductive’ anyway?
It reminds me of the ‘good load’ that used to be ‘germane load’. But of course ‘germane load’ was supposedly removed as an independent load type. Yet ‘germane’ still plays a role: the article mentions ‘germane processing’ and a distributive role between intrinsic and extraneous load.
This is why I previously found claims that ‘germane load has been removed’ unjustified. It has been repackaged and tucked away, but it is actually still there, with all its strengths and weaknesses.
Like so many CLT articles, the first pages are devoted to summarising. For example, the newly ‘found’ evolutionary principles. I seldom read a justification for adopting them, to be honest, just that they are adopted. It remains risky, in my opinion, and also not necessary for the theory.
After all, the authors reiterate CLT’s goal is “to optimize learning of complex cognitive tasks by transforming contemporary scientific knowledge on the manner in which cognitive structures and processes are organized (i.e., cognitive architecture) into guidelines for instr design”.
And in my opinion the evolutionary principles are not used to do that, but to speculate ‘post hoc’ about the why of some of the findings… I still find it strange they made it to a core underpinning layer… there is no progression there either (studies on biologically primary/secondary knowledge?).
Anyway, the paper then goes on to describe ‘exemplary methods to manage cognitive load’ in three categories: through the learning tasks, the learner, and the learning environment. Maybe that categorisation could have been something, but each example is, imo, woefully short.
Several sections simply refer to other articles and are therefore imo rather uninformative. Mind you, even with more information, most would just double up with other articles, in particular the ‘20 years on’ article link.springer.com/article/10.100…
For the learning tasks we only get short sections about the split-attention effect, worked examples effect, and guidance-fading effect. The latter is called a ‘compound effect’, and this is mentioned but hardly explained, while it is rather important, as it impacts other effects.
The ones for the learner are notable: collaboration, gesturing and motivational cues. I’m not sure you often see those mentioned by CLT adopters (outside of research, where these authors have mentioned them numerous times).
Good to mention motivation, especially if you have ‘imported’ Geary’s evolutionary work (not my choice). When you read Geary it’s quite clear motivation is a key element. In my opinion, it’s best not to dismiss it as merely resulting from achievement.
The final category pertains to the ‘learning environment’ and gives short descriptions of attention-capturing stimuli reduction, eye closure and stress-suppressing activities. But we don’t get a lot of detail about the methods that can achieve this. Often just one study is cited.
The conclusion then also gives quite an important disclaimer: “It is important to note that these characteristics interact and should always be considered by instructional designers as one system in which manipulating one aspect has consequences for the whole system”.
I have never had an issue with procedural knowledge. I am fed up, though, with the misleading analogies with early phonics. Procedural and conceptual knowledge go hand-in-hand at all ages.
Now some folk will say that that will still be the case if you 'push back' content to later education phases. But there is a risk that every phase will say 'the next one will have to do it'. This is why we must always keep both procedural and conceptual knowledge firmly in focus.
TBH I was also surprised by the 'pleasure' link. Glad to see it, but recently I've not seen it mentioned much in what I would call 'science of learning' views. They tend to one-sidedly highlight the achievement-to-motivation direction, when it's bidirectional.
I never read Nuthall's The Hidden Lives of Learners before today, after so many mentions of it over the years. I must say that personally I was a bit underwhelmed. I'm sure his career is impressive... and maybe I should mainly have seen it as a convincing narrative...
But if the book is argued to be evidence-based, I thought the claims were quite hard to check, and the book itself rather low on research detail. Let's just say I expected more.
Just put in a few direct article and page references for key claims; how hard is that? Now I have to do quite some work to find claims like 'three times confronted with knowledge' and the '80% from others, 80% wrong'. Maybe someone can give the exact studies?
We've known this because, unfortunately, it is not really a 'new study' (maybe a few small changes) but yet another re-analysis of PISA 2012. All countries were already included by Caro et al. (2015) researchgate.net/publication/28… - PISA 2015 has also been sliced and diced to death.
So, we are talking about the same source, and there's much to say about the scales (the casual way in which the paper equates scales reminds me of papers that declare inquiry, PBL and student-orientation all the same, when they're not).
It might be the case that it appeared in this quite unremarkable journal because it had basically already been done. One thing I would check is the within-country variance.
There have been quite a few people who did not seem up-to-date with decades of literature on online and blended learning, but who feel expert because of online learning during the pandemic.
And it’s not that it isn’t worthwhile to keep on studying the determinants of effective learning, it’s just that my sense is that there is a lot of reinventing the wheel. Take some of the OU stuff from ages ago with quizzes and more open answers….
…multiple-choice quizzing with a bit of spacing imo is then rather underwhelming. Sure, sometimes things just take a ‘crisis’ (the pandemic in this case) to make a step change, but can I just ask people to read up on the history of online learning?
When some people on edutwitter don't want to talk about terminology, it isn't always because they have a good eye for 'obfuscation' and 'relevance', but because they need a 'persuasive definition' for their semantic sophistry.
Take the recent inquiry/explicit convos. For inquiry you need to be able to bunch all criticism together, so you can use it all interchangeably, and paint a field that uniformly fails.
With explicit instruction, direct instruction, Direct Instruction and Explicit Direct Instruction, despite these being wildly different, with different evidence bases (many positive), you can then just talk about it all as one coherent, clear field...
Reading the Ofsted maths review a bit more. I really think the categorisation of knowledge into declarative, procedural and conditional knowledge is very limited. The latter is not used a lot, afaik, but is metacognitive and strategic in nature (yet metacognition is not mentioned).
With Rittle-Johnson et al.’s (and others’) work on procedural and conceptual knowledge, I especially find the omission or rephrasing of ‘conceptual’ notable. The word ‘conceptual’ appears in several places…
… in relation to ‘fluency’.
… in the table under ‘declarative’ as ‘relationships between facts’ (conceptual understanding)
… ‘develop further understanding through applying procedures’
… in a table under ‘procedural’
…