I've been working on a project that is a bit niche. It's not finished yet, as I have to finish other things first, but it tries to tap into the iconic status of the Countdown show. It has been running since 1982 en.wikipedia.org/wiki/Countdown…
A show typically has letter/word rounds (I'm less interested in those) and number rounds (yes!). Even from when I was young - in the Netherlands we had a variant called 'Cijfers en Letters' ('Numbers and Letters') - I have been intrigued by solution processes.
For example, many players would just multiply a number by 100 and then add some other numbers. Others seemed to have more insight into arithmetical properties.
The rules: six numbers are chosen from stacks of numbers. There are two groups: 20 "small numbers" (two each of 1 through 10) and four "large numbers" (25, 50, 75 and 100). A random target number is generated from a uniform distribution (101 to 999, I think).
Then, in 30 seconds, you need to try to get as close as possible to the target number, using the four main operations. Only whole numbers are allowed, and you can use each number only once. You get the picture (you will probably recognise it).
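To make the setup concrete, here's a minimal sketch of one round as described above (my own simulation for illustration; the show's actual drawing procedure, and the contestant's choice of two large numbers here, are assumptions):

```python
import random

def draw_round(n_large: int) -> tuple[list[int], int]:
    """Sketch of one numbers round: pick n_large large numbers (0-4),
    fill up to six from the small stack, then draw a target
    uniformly from 101-999 (as noted above)."""
    large = random.sample([25, 50, 75, 100], n_large)
    small_stack = list(range(1, 11)) * 2        # two each of 1..10
    small = random.sample(small_stack, 6 - n_large)
    target = random.randint(101, 999)
    return large + small, target

numbers, target = draw_round(n_large=2)
print(numbers, "->", target)
```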
Sometimes solutions are mind-blowing, for example this one:
Sure it could be done more economically, but these strategies fascinate me.
Could they be studied?
I encountered a website that manually keeps historical records of all Countdown episodes.
Ah, a mining challenge!
I wrote a script that scraped all 6373 episodes from the website and then extracted more than 40000 number rounds from them.
As these are manual records there were some issues (typos, inconsistencies, etc.), but it worked quite nicely. I especially wanted to automatically double-check whether the calculation really led to the answer.
After all, any syntax error should be corrected. I think there were about 2000 that had to be corrected manually. I'm still doing that; it's the boring part... :-)
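The automated double-check I mean is roughly this kind of thing (a sketch, not my actual script; `check` and the example expression are hypothetical): evaluate the recorded expression with a restricted parser and compare it to the recorded answer, flagging anything that fails for manual correction.

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a recorded arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        raise ValueError("disallowed syntax")   # i.e. a record to fix by hand
    return walk(ast.parse(expr, mode="eval").body)

def check(expr: str, declared: int) -> bool:
    # A fuller check would also verify whole-number intermediate results
    # and that each of the six numbers is used at most once.
    try:
        return safe_eval(expr) == declared
    except (ValueError, SyntaxError, ZeroDivisionError):
        return False

print(check("(100 + 6) * 9 - 4", 950))   # True
```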
Mind you, I have to rely on these historical records, and of course I've already noted some omissions. But tens of thousands of calculations are already interesting, I think.
These sums can be studied in terms of their structure.
The plan is to use sequence analysis to study the solution patterns, also in relation to success and achievement. I've actually done some already but there are some conceptual choices still to make.
As there are too many combinations of unique numbers, I make the same distinction as the game show: Big and Small numbers. There are nice statistical and visual ways, I think, to make strategies more visible. An experiment below.
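The kind of encoding I have in mind looks roughly like this (a sketch; the token alphabet and the `encode` helper are my illustration, not a settled choice): each operand becomes B (big: 25/50/75/100) or S (small: 1-10), while operators and brackets stay as they are.

```python
import re

BIG = {25, 50, 75, 100}

def encode(expr: str) -> list[str]:
    """Reduce a solution to a Big/Small + operator sequence."""
    out = []
    for tok in re.findall(r"\d+|[+\-*/()]", expr):
        out.append(("B" if int(tok) in BIG else "S") if tok.isdigit() else tok)
    return out

print(encode("(100 + 6) * 9 - 4"))
# ['(', 'B', '+', 'S', ')', '*', 'S', '-', 'S']
```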
What I didn't realise while parsing the sums is the large role of (superfluous) brackets.
I was happy to see that the target numbers indeed seem to adhere to a uniform distribution :-)
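One way to check that formally (a sketch; simulated targets stand in for the extracted ones) is a chi-square goodness-of-fit test against a uniform distribution over 101-999:

```python
import random
from collections import Counter
from scipy.stats import chisquare

targets = [random.randint(101, 999) for _ in range(40_000)]  # placeholder data

counts = Counter((t - 101) // 100 for t in targets)   # 9 bins over 101-999
observed = [counts.get(i, 0) for i in range(9)]
widths = [100] * 8 + [99]                             # last bin is 901-999
f_exp = [len(targets) * w / 899 for w in widths]
stat, p = chisquare(observed, f_exp=f_exp)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")  # large p: no evidence against uniformity
```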
I'm excited to be using data from an iconic game show. Will pick up again soon.
One operational challenge is that the matrices for sequence analysis, with 34000+ rows and up to 23 'elements' (the longest formula, with brackets), amount to gigabytes... systems at home are struggling; I really need the better system in my office.
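For what it's worth, a back-of-envelope estimate, assuming (my guess, not stated above) that the gigabytes come from the pairwise dissimilarity matrix that sequence-analysis tools typically compute:

```python
# n sequences -> an n-by-n dissimilarity matrix
n = 34_000
full_float64 = n * n * 8 / 1e9              # full matrix, 8-byte floats
tri_float32 = n * (n - 1) / 2 * 4 / 1e9     # lower triangle, 4-byte floats
print(f"full float64:       {full_float64:.1f} GB")   # ~9.2 GB
print(f"triangular float32: {tri_float32:.1f} GB")    # ~2.3 GB
```

Halving the precision and storing only the triangle already cuts the footprint by a factor of four, if the tooling allows it.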
I have never had an issue with procedural knowledge. I am fed up, though, with the misleading analogies with early phonics. Procedural and conceptual knowledge go hand-in-hand at all ages.
Now some folk will say that that will still be the case if you 'push back' content to later education phases, but there is a risk that every phase will say 'the next one will have to do it'. This is why we must always keep both procedural and conceptual knowledge firmly in focus.
TBH I was also surprised by the 'pleasure' link. Glad to see it, but recently I've not seen it mentioned much in what I would call 'science of learning' views. They tend to one-sidedly highlight the achievement-to-motivation direction, when it's bidirectional.
I had never read Nuthall's The Hidden Lives of Learners before today, despite so many mentions of it over the years. I must say that personally I was a bit underwhelmed. I'm sure his career is impressive... and maybe I should have mainly seen it as a convincing narrative...
But for a book argued to be evidence-based, I thought the claims were quite hard to check, and the book itself rather low on research detail. Let's just say I expected more.
Just put in a few direct article and page references for key claims; how hard is that? Now I have to do quite some work to find claims like 'three times confronted with knowledge' and the '80% from others, 80% wrong'. Maybe someone can give the exact studies?
We've known this, because unfortunately it is not really a 'new study' (maybe a few small changes) but yet another re-analysis of PISA 2012. All countries were already included by Caro et al. (2015) researchgate.net/publication/28… - PISA 2015 has also been sliced and diced to death.
So we are talking about the same source, and there's much to say about the scales (the casual way in which the paper equates scales reminds me of papers that declare inquiry, PBL and student-orientation all the same, when they're not).
It might be the case that it appeared in this quite unremarkable journal because it had basically already been done. One thing I would check is the within-country variance.
There have been quite a few people who do not seem up to date with decades of literature on online and blended learning, but who feel expert because of online learning during the pandemic.
And it’s not that it isn’t worthwhile to keep on studying the determinants of effective learning, it’s just that my sense is that there is a lot of reinventing the wheel. Take some of the OU stuff from ages ago with quizzes and more open answers….
…multiple choice quizzing with a bit of spacing imo is then rather underwhelming. Sure, sometimes things just take a 'crisis' (the pandemic in this case) to make a step change, but can I just ask people to read up on the history of online learning?
When some people on edutwitter don't want to talk about terminology, it isn't always because they have a good eye for 'obfuscation' and 'relevance'; sometimes it's because they need a 'persuasive definition' for their semantic sophistry.
Take the recent inquiry/explicit convos. For inquiry you need to be able to bunch all criticism together, so you can use it all interchangeably, and paint a field that uniformly fails.
With explicit instruction, direct instruction, Direct Instruction and Explicit Direct Instruction, despite these being wildly different, with different evidence bases (many positive), you can then just talk about it all as one coherent, clear field...
Reading the Ofsted maths review a bit more. I really think the categorisation of knowledge into declarative, procedural and conditional knowledge is very limited. The latter is not used a lot afaik, but is metacognitive and strategic in nature (yet metacognition is not mentioned).
With Rittle-Johnson et al.'s (and others') work on procedural and conceptual knowledge, I especially find the omission or rephrasing of 'conceptual' notable. The word 'conceptual' appears in several places….
… in relation to ‘fluency’.
… in the table under ‘declarative’ as ‘relationships between facts’ (conceptual understanding)
… ‘develop further understanding through applying procedures’
… in a table under ‘procedural’
…