(I know some will keep insisting that it 'at least is better than not having it at all', but I would argue this really depends on what you're looking at. Often it's a trade-off with other things.)
Of course the paper is medicine oriented, but given that some like to make that comparison anyway... In social science there are often even more challenging limitations. But the 'randomisation' points here also apply....
initial sample selection bias
You really need to check whether that influences outcomes. A random sample within an independent school? You need to check whether it generalises more widely. I had that challenge with some Mental Rotation work in an independent school.
achieving-good-randomisation assumption/incomplete baseline data limitation
Although some assume that saying 'random' is enough (often with little info), you really do need to check whether there actually is balance, especially on key traits.
The article also talks about blinding, a tricky aspect in education research. In classroom trials especially it is virtually impossible to do. But it does remain a limitation then...
You're a teacher in a school, you run a study, and every student knows? That could be an issue. Even more so if you lead the intervention yourself. My point is not that RCTs are useless, but that there are limitations, and it's hard to say one design is per se 'worse' than others...
I have seen RCTs with poor materials that seemed less useful than quasi-experiments with great materials. I have seen qualitative observational studies that gave less insight than RCTs-with-process-eval. Horses for courses.
We've known this because unfortunately this is not really a 'new study' (maybe a few small changes) but yet another re-analysis of PISA 2012. All countries were already included by Caro et al. (2015) researchgate.net/publication/28… - PISA 2015 has also been sliced and diced to death.
So, we are talking about the same source, and there's much to say about the scales (the casual way in which the paper equates scales reminds me of papers that declare inquiry, PBL and student-orientation all the same, when they're not).
It might be the case that it appeared in this quite unremarkable journal because it had basically already been done. One thing I would check is the within-country variance.
There have been quite a few people who do not seem up-to-date with decades of literature around online and blended learning, but feel expert because of online learning during the pandemic.
And it’s not that it isn’t worthwhile to keep on studying the determinants of effective learning, it’s just that my sense is that there is a lot of reinventing the wheel. Take some of the OU stuff from ages ago with quizzes and more open answers….
…multiple choice quizzing with a bit of spacing imo then is rather underwhelming. Sure, sometimes things just take a 'crisis' (the pandemic in this case) to make a step change, but can I just ask people to read up on the history of online learning?
When some people on edutwitter don't want to talk about terminology, it isn't always because they have a good eye for 'obfuscation' and 'relevance', but because they need a 'persuasive definition' for their semantic sophistry.
Take the recent inquiry/explicit convos. For inquiry you need to be able to bunch all criticism together, so you can use it all interchangeably and paint the field as one that uniformly fails.
With explicit instruction, direct instruction, Direct Instruction and Explicit Direct Instruction, despite these being wildly different, with different evidence bases (many positive), you can then just talk about them as one coherent, clear field...
Reading the Ofsted maths review a bit more. I really think the categorisation of knowledge into declarative, procedural and conditional knowledge is very limited. The latter is not used a lot afaik but is metacognitive and strategic in nature (yet metacognition is not mentioned).
With Rittle-Johnson et al.'s (and others') work on procedural and conceptual knowledge, I find the omission or rephrasing of 'conceptual' especially notable. The word 'conceptual' appears in several places….
… in relation to ‘fluency’.
… in the table under ‘declarative’ as ‘relationships between facts’ (conceptual understanding)
… ‘develop further understanding through applying procedures’
… in a table under ‘procedural’
…
Ok, some thoughts on E. D. Hirsch's latest book. To be honest, I've seen/heard 4 or 5 interviews with him, so some of that might be mixed in.
Let me begin by saying that it's quite clear that a desire for social justice really drives Hirsch. He seems passionate in both audio and writing. Several people, including himself, have called this (last) book his most pronounced.
I can see that, but I do think that because of it some facts suffer. This is why I thought it wasn't as good as 'Why knowledge matters' (I wrote bokhove.net/2017/02/17/hir… about that book).
When people discuss CLT effects I seldom hear them mention that Sweller et al. (2019) themselves call some of them 'compound effects', which they imo describe rather vaguely as 'not a simple effect' but 'an effect that alters the characteristics of other cognitive load effects' (p. 276).
Interestingly, compound effects 'frequently indicate the limits of other load effects'. In other words, in some contexts effects that might otherwise be relevant are no longer relevant because of such compound effects.
Five effects are deemed 'compound effects'. One of the 'old effects' is element interactivity, where there is a distinction between learning materials with high and low element interactivity (let me just say 'complexity of the materials').