There are loads of things that matter in good research. There is an assumption that if one of the ‘gold standard’ criteria isn’t met, it can’t be good research. I would rather say that it just has a limitation. It would not be good to think in black-and-white terms here.
What some also seem to forget is that all those criteria matter. So, it’s great that you randomised participants, but if your measurement is bad… it’s still bad. Or if your comparison groups are poorly chosen… still poor.
Or take intervention materials. You can get everything ‘right’, but if your materials are poor and unlikely to ever be used in a classroom (understandable, maybe you are trying to ‘control’ other things and kept it simple), can we then rely on the findings?
I have seen exemplary RCTs where, if you then drilled down to the sample, there were obvious imbalances. Sometimes ‘social science’ or ‘education research’ is criticised on this front, with medicine held up as the example to follow, but it can pose issues there as well. tandfonline.com/doi/full/10.10…
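(To make that ‘drilling down’ concrete, here is a minimal sketch of a baseline balance check using standardised mean differences between trial arms. The data, column names and the 0.1 flagging threshold are all illustrative assumptions, not anything from the studies mentioned.)

```python
# Minimal sketch: check baseline balance between trial arms via
# standardised mean differences (SMD). Illustrative data only.
import numpy as np
import pandas as pd

def standardised_mean_difference(x_treat, x_control):
    """SMD = difference in means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return (x_treat.mean() - x_control.mean()) / pooled_sd

# Hypothetical trial data: one row per pupil.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "arm": rng.choice(["treatment", "control"], size=200),
    "prior_attainment": rng.normal(100, 15, size=200),
    "age_months": rng.normal(120, 6, size=200),
})

for covariate in ["prior_attainment", "age_months"]:
    smd = standardised_mean_difference(
        df.loc[df["arm"] == "treatment", covariate],
        df.loc[df["arm"] == "control", covariate],
    )
    flag = "imbalanced?" if abs(smd) > 0.1 else "ok"  # 0.1 is a common rule of thumb
    print(f"{covariate}: SMD = {smd:.2f} ({flag})")
```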
Some will perhaps find all these tweets ‘postmodern’, but you’d be mistaken to think that. I value rigorous design and rigorous measurement. Whether qualitative or quantitative, we need to do the best research we can. But caricatures are pointless.
Perhaps even working in complementary ways is best, possibly in teams:
An observation study of teachers.
Some secondary data analysis.
A focus group with students.
Some design research to improve the materials.
A trial of the materials in a large randomised sample.
Etc.
I have never had an issue with procedural knowledge. I am fed up, though, with the misleading analogies with early phonics. Procedural and conceptual knowledge go hand-in-hand at all ages.
Now some folk will say that this will still be the case if you 'push back' content to later education phases, but there is a risk that every phase will say 'the next one will have to do it'. This is why we must always keep both procedural and conceptual knowledge firmly in focus.
TBH I was also surprised by the 'pleasure' link. Glad to see it, but recently I've not seen it mentioned much in what I would call 'science of learning' views. They tend to highlight only the achievement-to-motivation direction, when it's bidirectional.
I had never read Nuthall's The Hidden Lives of Learners before today, after so many mentions of it over the years. I must say that personally I was a bit underwhelmed. I'm sure his career is impressive... and maybe I should mainly have seen it as a convincing narrative...
But for a book that is argued to be evidence-based, I thought the claims were quite hard to check, and the book itself rather light on research detail. Let's just say I expected more.
Just put in a few direct article and page references for key claims; how hard is that? Now I have to do quite some work to find claims like 'three times confronted with knowledge' and the '80% from others, 80% wrong'. Maybe someone can give the exact studies?
We've known this because, unfortunately, it is not really a 'new study' (maybe a few small changes) but yet another re-analysis of PISA 2012. All countries were already included by Caro et al. (2015) researchgate.net/publication/28… - PISA 2015 has also been sliced and diced to death.
So we are talking about the same source, and there's much to say about the scales (the casual way in which the paper equates scales reminds me of papers that declare inquiry, PBL and student-orientation all the same, when they're not).
It might be the case that it appeared in this quite unremarkable journal because it had basically already been done. One thing I would check is the within-country variance.
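(For anyone wondering what that check amounts to: a standard multilevel gloss, my notation, not the paper's. With pupils nested in countries, the variance in scores splits into between- and within-country parts, and the intraclass correlation tells you how much of the action sits between countries at all.)

```latex
% Two-level model: pupil i in country j
y_{ij} = \gamma_0 + u_j + e_{ij}, \qquad
u_j \sim \mathcal{N}(0, \sigma^2_u), \quad
e_{ij} \sim \mathcal{N}(0, \sigma^2_e)

% Variance decomposition and intraclass correlation:
\operatorname{Var}(y_{ij}) = \sigma^2_u + \sigma^2_e, \qquad
\rho = \frac{\sigma^2_u}{\sigma^2_u + \sigma^2_e}
```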
There have been quite a few people who did not seem up to date with decades of literature on online and blended learning, but who feel expert because of online learning during the pandemic.
And it’s not that it isn’t worthwhile to keep studying the determinants of effective learning; it’s just that my sense is that there is a lot of reinventing the wheel. Take some of the OU stuff from ages ago with quizzes and more open answers…
…multiple-choice quizzing with a bit of spacing is then, imo, rather underwhelming. Sure, sometimes things just take a ‘crisis’ (the pandemic in this case) to make a step change, but can I just ask people to read up on the history of online learning?
When some people on edutwitter don't want to talk about terminology, it isn't always because they have a good eye for 'obfuscation' and 'relevance'; sometimes it's because they need a 'persuasive definition' for their semantic sophistry.
Take the recent inquiry/explicit convos. For inquiry you need to be able to bunch all criticism together, so you can use it all interchangeably, and paint a field that uniformly fails.
With explicit instruction, direct instruction, Direct Instruction and Explicit Direct Instruction, despite these being wildly different, with different evidence bases (many positive), you can then just talk about them as one coherent, clear field...
Reading the Ofsted maths review a bit more. I really think the categorisation of knowledge into declarative, procedural and conditional knowledge is very limited. The latter is not used a lot afaik, but it is metacognitive and strategic in nature (yet metacognition is not mentioned).
Given Rittle-Johnson et al.'s (and others') work on procedural and conceptual knowledge, I find the omission or rephrasing of ‘conceptual’ especially notable. The word ‘conceptual’ appears in several places…
… in relation to ‘fluency’.
… in the table under ‘declarative’ as ‘relationships between facts’ (conceptual understanding)
… ‘develop further understanding through applying procedures’
… in a table under ‘procedural’
…