Reading the Ofsted maths review a bit more. I really think the categorisation of knowledge is very limited with declarative, procedural and conditional knowledge. The latter is not used a lot afaik but is metacognitive and strategic in nature (but metacognition not mentioned).
With Rittle-Johnson et al’s (and others’) work on procedural and conceptual knowledge, I especially find the omission or rephrasing of ‘conceptual’ notable. The word ‘conceptual’ appears in several places….
… in relation to ‘fluency’.
… in the table under ‘declarative’ as ‘relationships between facts’ (conceptual understanding)
… ‘develop further understanding through applying procedures’
… in a table under ‘procedural’
…
… but later on it is said that rehearsal goes alongside ‘the development of conceptual understanding’. So the term is used, but not really defined except as subordinate to other categories. I think this is not in line with Rittle-Johnson et al. uni-trier.de/fileadmin/fb1/…
Mind you, ‘hand-in-hand’ is mentioned with reference to this article in footnote 85. But in footnote 30 it again is one direction that is emphasised. At the very least, I think the review is ambivalent, starting with the choice of a categorisation that does not include ‘conceptual’.
Based on research, I think ‘conceptual’ would be appropriate (Rittle-Johnson, Adding it Up) and ‘conditional’ perhaps clearer as ‘metacognitive knowledge’. You know, basically ‘revised Bloom’ 🤫 @ryandal
‘Problem solving’ is another one of those illustrious terms. I certainly do agree that the term can be opaque, and also that procedural knowledge is interlinked with it (as in Rittle-Johnson). However, I find the way it is depicted limited again, and mainly ‘edutwitter’s greatest hits’.
Footnote 26 starts with Alexander and Schoenfeld, but then immediately winds back with what I often consider a strawman, namely ‘problem solving as generic skill’. The reference is to Sweller’s work but tbh the article is rather superficial app.nova.edu/toolbox/instru…
But admittedly, there are more extensive articles by the authors, arguing similar things. The definitions of ‘problem solving’ but also, I would argue, ‘conceptual knowledge’ are important here. Footnote 28 includes the book by Bransford et al… nap.edu/read/9853/chap…
I thought the book was an indication, again, of a broader view on learning, as it has plenty on conceptual, problem solving, transfer etc.. but the reference is to a few pages with research on….chess….chess has become the go-to topic (also in footnote 27) to…
…supposedly make all sorts of claims about knowledge in general. Anecdotally, based on my own chess experience, becoming more expert in chess *also* is a combination of practice, playing games, studying examples etc…I find the characterisation of it often lacking.
Let alone that what we learn/know about chess might say little about more complex domains. And people know that, because they love to cite how chess courses do not transfer to other domains.. (e.g. Sala and Gobet)… #tobecontinued
Actually, the next part is the motivation section. I have often written about that. Bi-directional. The Ma and Xu paper does highlight one direction, but only with a limited view of the evidence base would you prioritise it. The same goes for ‘anxiety’.
I will note that footnote 43 adds an important element re self-concept. There even is something metacognitive in the sense of ‘judgement of knowing’, which imo again highlights the importance of that type of (conditional) knowledge.
I think some furore was around the claim ‘using games can lead to less learning than more’ in the context of ‘motivation’. I understand why. Although I applaud a seemingly omnivorous approach to research (not just RCTs),…. link.springer.com/article/10.100…
…I thought the limited nature of the study was an issue. But I also think the results do not show less learning but that they ‘do not help’, so the formulation seems problematic. I also think there are better overviews of the use of games.
What I liked in the review is the emphasis on how much we can do to support development of maths (so not a belief in ‘naturals’). In my opinion, we can absolutely build a strong foundation. However, despite some places where ‘both’ are said to be important…
…I again thought that ‘facts’, ‘prioritising declarative knowledge’, ‘core content’, ‘automatic recall’ etc. more sent out the message of one ‘side’. Of course, you could say that not mentioning ‘conceptual’ does not mean it is not deemed important, but tbf I…
…think the overall picture, as mentioned before, is unbalanced (and therefore not in line with what I know about the evidence). As an aside, with footnote 64 shape is mentioned but wasn’t the shape ELG removed @helenjwc ?
I will emphasise yet again, polarising ‘thinkers’ might otherwise misrepresent me, that practice, rehearsal, facts absolutely are important. But I’m just saying that they go hand-in-hand with conceptual knowledge.
Sometimes, something similar is said, by the way, for example ‘conceptual building blocks of algebraic thinking are systematically planned into the earliest of curriculum stages’ in relation to other countries. This is why I don’t understand why the review -in my opinion-…
…sets up more of an opposition between the procedural and conceptual. In fact, I would even say (e.g. see the Two/Four Basics or variation) that East Asian countries manage to exemplify perfectly how they are interlinked.
I think the review has some useful warnings on the use of manipulatives. However, it doesn’t say much about the positive effects, even though Willingham and Fyfe are referenced (footnotes 72 and onward).
It’s interesting -again the Asian link- the manipulative of a counting frame/soroban is then highlighted. Maybe rightly so, with the rekenrek developments, but for me it does jar: careful with manipulatives…except this one.
It is around footnote 85 that ‘balance’ is promoted again, with the aforementioned review by Rittle-Johnson, Schneider and Star (2015). This is good; imo the preceding sections also needed more of that balance.
The section on methods for working algebraically has some useful references to representations. I don’t think ‘contextualised representations’ and ‘abstract representations’ should be set in opposition to each other. Rather, research on concreteness fading (Fyfe) and…
…the CPA (Concrete, Pictorial, Abstract) approach popularised by Singapore (but rather ‘imported’ from ‘the west’ in the 80s) show that they can also go hand-in-hand.
The review then gets into ‘conditional knowledge’. Recall that in the categorisation of knowledge, I think ‘conceptual’ is sorely lacking. This ‘conditional knowledge’ can be seen more as metacognitive and/or strategic knowledge.
There are some nice references including this article by Schoenfeld (typo in ref list) mathed811fall2014.pbworks.com/w/file/fetch/8… - I like the mention of ‘deep structure’, but as mentioned before imo the description of ‘problem solving’ is limited.
It’s good the review highlights the role of language, for example word problems. There actually are some more links that can be made with the TIMSS relationship report and word problem research by Sweller. slideshare.net/cbokhove/ametn…
I have no problem with the subsequent emphasis on ‘classes of problems’. In fact, it’s likely that learners will approach it that way anyway (e.g. have to think about Chi’s categorical shift theory or Ohlsson’s resubsumption tandfonline.com/doi/abs/10.108…) but as mentioned…
..the ‘problem-solving=generic’ seems like a caricature to me, as it will always be an interplay of factors (some of which are mentioned, like practice, denoting deep structure). IMO there is no need to place them in opposition.
IMO a big risk of doing so is that, for example, some will mistakenly assume conceptual knowledge will follow automatically. And especially in a time-constrained curriculum it might fall away if at ‘the end’ of a curriculum sequence. Hand-in-hand.
OK, need to take a break. There are a few more new things to discuss/uncover but many points are in line with the previously sketched larger picture. #tobecontinued
I must say that it is harder to report on the second half of the review because I feel so much is ‘more of the same’ e.g. caution about giving pupils ownership, balancing new and rehearsing old content, but also imo a limited view on East Asian countries (footnote 112).
The references don’t really say much about that, to be honest. For example, Binder and Watkins (the latter a known name to me re Direct Instruction) and a limited case study. As said, I like the inclusive view on research (no RCT snobbery) but the downside is generalisability.
There then follow sections on learning maths for students with learning difficulties. I don’t know enough about that, but I would agree with the gist that a lot of ‘good teaching’ for all students also would be good for them.
I did not like the term ‘powerful declarative memory system’…but it does seem based on prior writings, for example spectrumnews.org/opinion/powerf…
In line with what I said previously, a risk of course is that no attention is given to anything else, given time-constraints. I think at all levels it is possible to keep the ‘hand-in-hand’ approach to procedural and conceptual knowledge ‘alive’, but perhaps in differing depth.
Then, arguably the most sensitive sections…about pedagogy. I can’t help but think about the furore around prescribed methods. We know that the inspection’s views will be taken very seriously. Footnote 120: dedicated time for teaching and practice…ok…
…but I think the language does start to become more coloured…again ‘Not all pupils will discover or invent….by themselves’…and they ‘need more than natural learning’ (whatever is meant by it)…ok true…but no one is claiming they should, right?
Sure, I don’t see much wrong in incorporating ‘extra elements of explicit, systematic instruction’ but given the way existing practices are described, I’m not sure the description of current practices is fair. Furthermore, claims are bold (beginnings 😎):
“This will help to close the school entry gap in knowledge. It will also give more pupils the foundations for mathematical success, as well as greater self-esteem.” - I think different approaches are just conveniently grouped together too easily.
To be honest, I think Project Follow Through and D.I. research deserve a mention, but the claim is strong. A more critical discussion is needed (e.g. in April this EEF pilot with DI derived Connecting Maths educationendowmentfoundation.org.uk/projects-and-e…).
Footnote 123 is another reference to Binder and Watkins on D.I. and indeed outcomes for self-esteem did improve there. Of course a relevant Q remains whether, given the horserace design and it being a social programme, it is easily applied to other contexts.
Although I know there are varying views on ‘variation’ in ‘variation theory’, not least re conceptual and procedural variation, I think it’s nice it’s mentioned. Actually, it’s a shame procedural and conceptual aren’t mentioned, as it could really emphasise the ‘hand-in-hand’.
A large section then goes on to ‘consolidation of learning’. Important. Again, I don’t necessarily have an issue with the emphasis on practice, but as stated earlier it feels one-sided. The sections on quantity and quality mainly emphasise practice. I probably need to dig deeper in the footnotes.
I thought the emphasis on practice for consolidation fine, I have already said what I missed. The homework and textbook points are correct imo. In secondary (cos TIMSS and PISA refer to that) there indeed is relatively little homework.
Coming from a country with, I think, more of a homework and textbook tradition, I think they have positive effects (evidence for another time). I do think the typology of other countries verges on caricature though.
The ‘quality’ section continues with the textbook emphasis. Interestingly, some positive comments about ‘games’ are made, which contrast nicely with the initially critical comments about games.
I thought the section ‘Tasks that are content-focused and achievable’ was quite nice, but tried to do a lot, including comments on sound in the classroom and groupwork. I wasn’t always convinced re the references. E.g. on noise edweek.org/leadership/low… was cited…
…but that article mentions both research/comments critical of noise and how noise can be ‘blocked out’. In any case, the review presents it more ‘one way’ (e.g. Massonnie?).
I like how scaffolding (Bruner?) is mentioned, but the reporting mainly warns us against misuse of manipulatives. Although boundary conditions are important, it would have been good to read the positive case (semanticscholar.org/paper/A-meta-a…), especially the differential effects for knowledge types.
The later parts seem to emphasise ‘balance’ a bit more, which is good, but this contrasts with the first half, in my opinion. And the one-sidedness resurfaces sometimes. For example with assessment, again proficiency->motivation, games suddenly do seem to be useful etc.
It is useful that the distinction between exams/performance and learning is mentioned, with a mixture of approaches deemed best. I think low-stakes ‘tests’ (or broader, retrieval opportunities) can be useful. But other claims are (too) strong.
“but lack of proficiency that causes this performance anxiety” - goes back again to the ‘motivation’ section. Far more aspects are involved here including self-concept.
You might have noticed that I am citing fewer footnotes here; there are too many to read in detail. The review finishes with a ‘school level systems’ section with a strange subsection called ‘calculation and presentation’, which doesn’t seem to fit, nor has any references.
The final section is reserved for an important topic re professional development. It has sensible recommendations incl ‘non-surface-level’ lesson study. It’s a shame ITE gets a dig again, based on Ofsted’s own January 2020 research. I’m sure things can be improved but…
…I do think that in a system that linked Ofsted judgements to whether you were allowed to have trainees (min good or outstanding) there could be some more critical reflection.
OK, enough, I think the broad brushes for each section are fair. I have not read and followed up *all* the references (yet). For now, I will conclude by forcing myself to give the three best things about the report and the three worst things. Nicely black and white, just like you want it 😉
(I haven’t thought too much about any diplomatic wording here, I’m sure I can refine them.)
Good things
- Exudes that Ofsted wants no one left behind. Everyone can do maths, from lower prior knowledge to higher, difficulties etc.
- Plenty of attention to practice and procedures is good. In some circles undervalued.
- Inclusive of several research types.
Worst things
- Mentions bidirectional procedural & conceptual knowledge, but pays little attention to the latter. Strange categorisation of knowledge types
- 1-sided & misleading accounts of the evidence on themes like problem-solving, motivation, manipulatives
- Other countries’ practices not described well
Ok, some thoughts on E.D.Hirsch’s latest book. To be honest, I’ve seen/heard 4 or 5 interviews with him so some of that might be mixed in.
Let me begin by saying that it’s quite clear that a desire for social justice really drives Hirsch. He seems passionate in both audio and writing. Several people, including himself, have called this (last) book his most pronounced.
I can see that but I do think because of that some facts suffer. This is why I thought it wasn’t as good as ‘Why knowledge matters’ (I wrote bokhove.net/2017/02/17/hir… about that book).
When people discuss CLT effects I seldom hear them mention that Sweller et al. (2019) themselves call some of them 'compound effects', which imo they rather vaguely describe as 'not a simple effect' but 'an effect that alters the characteristics of other cognitive load effects' (p. 276).
Interestingly, compound effects 'frequently indicate the limits of other load effects'. In other words, in some contexts effects that might otherwise be relevant are no longer relevant because of such effects.
Five effects are deemed 'compound effects'. One of the 'old effects' is element interactivity, where there is a distinction between learning materials with high and low element interactivity (let me just say 'complexity of the materials').
I've been working on a project that is a bit niche. It's not finished yet as I have to finish other stuff, but it tries to tap into the iconic status of the Countdown show. It has been running since 1982 en.wikipedia.org/wiki/Countdown…
A show typically has letter/word rounds (I'm less interested in those) and number rounds (yes!). Even from when I was young - in the Netherlands we had a variant called 'cijfers en letters' - I have been intrigued by solution processes.
For example, many players would just do a number times 100 and then add some other numbers. Others seemed to have more insight into arithmetical properties.
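As an illustration only (this is not code from my project, and the function names and simplified rules are my own assumptions), the difference between that naive ‘times 100, then add’ strategy and a full search over the arithmetic can be sketched with a minimal brute-force solver for a Countdown-style numbers round:

```python
from itertools import combinations

def solve(tiles, target):
    """Brute-force search for a Countdown-style numbers round (a sketch).

    tiles: list of (value, expression-string) pairs.
    Returns an expression string evaluating to target, or None.
    Intermediate results are kept as positive integers, so only
    positive differences and exact divisions are explored.
    """
    for v, e in tiles:
        if v == target:
            return e
    if len(tiles) < 2:
        return None
    # Combine every pair of remaining tiles with +, *, -, /
    for i, j in combinations(range(len(tiles)), 2):
        (a, ea), (b, eb) = tiles[i], tiles[j]
        rest = [tiles[k] for k in range(len(tiles)) if k not in (i, j)]
        candidates = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
        hi, ehi, lo, elo = (a, ea, b, eb) if a >= b else (b, eb, a, ea)
        if hi - lo > 0:                      # no zero or negative results
            candidates.append((hi - lo, f"({ehi}-{elo})"))
        if lo != 0 and hi % lo == 0:         # exact division only
            candidates.append((hi // lo, f"({ehi}/{elo})"))
        for v, e in candidates:
            found = solve(rest + [(v, e)], target)
            if found:
                return found
    return None

def countdown(numbers, target):
    """Convenience wrapper: numbers as plain ints."""
    return solve([(n, str(n)) for n in numbers], target)
```

The naive strategy players use on screen corresponds to exploring only the `big_tile * 100 + ...` branch; the exhaustive search above also finds the less obvious solutions that exploit arithmetical structure, at the cost of exponential search.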
I find it quite difficult to explain this but I’ll keep on trying. It’s about the constant change in how ‘knowledge’ is meant, as in: certain specific knowledge is good for knowing that specific knowledge, versus general claims about knowledge.
The tweet was about transfer of course, but quite often those commenting on transfer combine it with the domain-specificity of knowledge. Take chess. De Groot, Herbert Simon... or the Recht and Leslie baseball study....take-away: knowledge matters...
...but of course not any old knowledge matters. The original point is that it matters for assessment on that knowledge. Therefore, imo it is a shift of the use of knowledge, when people say “ergo, I’m a proponent of knowledge curricula” in the sense that ‘they work’.
(I know some will keep on insisting that it 'at least is better than not having it at all' but I would argue this really depends on what you're looking at. Often it's a trade-off with other things.)
Of course the paper is medicine oriented, but given that some like to make that comparison anyway... In social science there often are even more challenging limitations. But the 'randomisation' points here also apply, for example initial sample selection bias.
You really need to check if that doesn't influence outcomes. Random sample in an independent school? Need to check if generalisable more widely. I had that challenge with some Mental Rotation work in an independent school.
There are loads of things that matter in good research. There is an assumption that if one of the ‘gold standards’ criteria isn’t met, it can’t be good research. I would rather say that it just has a limitation. It would not be good to think b/w here.
What some also seem to forget is that all those criteria matter. So, it’s great you randomised participants but if your measurement is bad....it’s still bad. Or if your comparison groups are poorly chosen....still poor.
Or take intervention materials. You can get everything ‘right’, but if your materials are poor and unlikely to ever be used in a classroom (understandable, maybe you are trying to ‘control’ other things and kept it simple), can we then rely on the findings?