When people discuss CLT effects I seldom hear them mention that Sweller et al. (2019) themselves call some of them 'compound effects', which they, imo rather vaguely, describe as 'not a simple effect' but 'an effect that alters the characteristics of other cognitive load effects' (p. 276).
Interestingly, compound effects 'frequently indicate the limits of other load effects'. In other words, in some contexts effects that might otherwise be relevant are no longer relevant because of such effects.
Five effects are deemed 'compound effects'. One of the 'old effects' is element interactivity, which distinguishes between learning materials with high and low element interactivity (let me just say 'complexity of the materials').
Expertise reversal and guidance fading might be the best-known ones. Effects might not be applicable for complete novices (but when do they become relevant then? Could it be after 30 minutes?) or at the beginning of a learning sequence.
The transient information effect is well known, but do people know it is a compound effect? "Cognitive load effects that are found for transient information are typically not found for non-transient or less transient information" (p. 267).
This again strikes me as quite vague (some examples of effects are mentioned), especially 'less transient'. From memory, I think the 'transient information effect' has already been found with as few as 15 words.
The last one is called the 'self-management effect', which says that, when learners encounter ill-designed materials, you can explicitly teach them how to reduce such extraneous load themselves.
Of course, in an ideal world you would have perfect materials, but the last effect does raise the question, in my opinion, of whether being able to deal with instances of 'non-optimal load' might be more useful. After all, a student can self-manage that, not the teacher's materials.
In any case, it seems important to note that, by Cognitive Load Theory's own account, there are five effects that interact with the others, and that it can never simply be a case of adopting CLT principles 'to reduce load'.
I’ve been searching a bit more for where these effects have been called ‘compound effects’, but for now I can only find the 2019 article. That, imo, makes it quite a sudden, substantial change...
I've been working on a project that is a bit niche. It's not finished yet as I have to finish other stuff, but it tries to tap into the iconic status of the Countdown show, which has been running since 1982 en.wikipedia.org/wiki/Countdown…
A show typically has letter/word rounds (I'm less interested in those) and number rounds (yes!). Even from when I was young - in the Netherlands we had a variant called 'Cijfers en Letters' - I have been intrigued by solution processes.
For example, many players would just multiply a number by 100 and then add some other numbers. Others seemed to have more insight into arithmetical properties.
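(Not the project itself, but to illustrate the numbers round: a minimal brute-force sketch in Python, with the tiles and target chosen purely as an example, that searches for an exact solution by repeatedly combining pairs of numbers with +, -, *, / while keeping intermediates as positive whole numbers.)

```python
from itertools import combinations

# Allowed operations; division only counts when it is exact (the usual Countdown convention).
OPS = [
    ("+", lambda a, b: a + b),
    ("-", lambda a, b: a - b),
    ("*", lambda a, b: a * b),
    ("/", lambda a, b: a // b if b != 0 and a % b == 0 else None),
]

def solve(numbers, target):
    """numbers is a list of (value, expression) pairs; return an expression hitting target, or None."""
    for (i, (a, ea)), (j, (b, eb)) in combinations(list(enumerate(numbers)), 2):
        rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
        for x, y, ex, ey in ((a, b, ea, eb), (b, a, eb, ea)):
            for sym, fn in OPS:
                r = fn(x, y)
                if r is None or r <= 0:  # keep intermediate results positive whole numbers
                    continue
                expr = f"({ex} {sym} {ey})"
                if r == target:
                    return expr
                found = solve(rest + [(r, expr)], target)
                if found:
                    return found
    return None

tiles = [100, 75, 6, 3, 2, 1]  # hypothetical selection of tiles
print(solve([(n, str(n)) for n in tiles], 203) or "no exact solution")
```

The 'multiply by 100 and add' strategy corresponds to the shallow part of this search (e.g. (2 * 100) + 3 = 203); spotting useful factorisations of the target is the deeper, more insightful route.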
I find it quite difficult to explain this but I’ll keep on trying. It’s about the constant change in what ‘knowledge’ is taken to mean, as in certain specific knowledge being good for knowing that specific knowledge, versus general claims about knowledge.
The tweet was about transfer of course, but quite often those commenting on transfer combine it with the domain-specificity of knowledge. Take chess. De Groot, Herbert Simon... or Recht and Leslie’s baseball study... take-away: knowledge matters...
...but of course not any old knowledge matters. The original point is that it matters for performance on assessments of that knowledge. Therefore, imo, it is a shift in the use of ‘knowledge’ when people say “ergo, I’m a proponent of knowledge curricula” in the sense that ‘they work’.
(I know some will keep on insisting that it 'at least is better than not having it at all', but I would argue this really depends on what you're looking at. Often it's a trade-off with other things.)
Of course the paper is medicine-oriented, but given that some like to make that comparison anyway... In social science there are often even more challenging limitations. But the 'randomisation' points here also apply...
initial sample selection bias
You really need to check whether that influences outcomes. A random sample in an independent school? You need to check whether it generalises more widely. I had that challenge with some Mental Rotation work in an independent school.
There are loads of things that matter in good research. There is an assumption that if one of the ‘gold standard’ criteria isn’t met, it can’t be good research. I would rather say that it just has a limitation. It would not be good to think in black and white here.
What some also seem to forget is that all those criteria matter. So, it’s great that you randomised participants, but if your measurement is bad... it’s still bad. Or if your comparison groups are poorly chosen... still poor.
Or take intervention materials. You can get everything ‘right’, but if your materials are poor and unlikely to ever be used in a classroom (understandable: maybe you were trying to ‘control’ other things and kept them simple), can we then rely on the findings?
Disappointed with this article. It just repeats what’s already known but with some added ambiguous wording -> Cognitive-Load Theory: Methods to Manage Working Memory Load in the Learning of Complex Tasks journals.sagepub.com/doi/full/10.11…
First the abstract. ‘Productive’ and ‘unproductive’ cognitive load seem to be the new favoured terms. Reminds me of ‘deliberate’ in ‘deliberate practice’. It’s tautological. Who would want ‘unproductive load’? When is it ‘unproductive’ anyway?
It reminds me of the ‘good load’ that used to be ‘germane load’. But of course ‘germane load’ was supposedly removed as an independent load type. Yet ‘germane’ still plays a role. The article mentions ‘germane processing’ and a distributive role between intrinsic and extraneous load.
“In particular, the ‘methods’ sections of such papers are vital because they demonstrate exactly what was measured, rather than what the researchers chose these measures to mean.”