Rereading Anderson et al.'s (1977) study on the effect of knowledge on the interpretation of ambiguous text passages...
Participants, who were enrolled in either a weightlifting class or a music class, read two ambiguous passages, each with two possible interpretations (prison/wrestling and cards/music). They then retold the passages and completed a multiple-choice quiz on each passage.
Each quiz question had a correct answer for each of the two interpretations. The weightlifters gave more wrestling-consistent answers on the prison/wrestling passage; the music students gave more music-consistent answers on the cards/music passage.
In addition, analysis of theme-revealing disambiguations in the retellings showed they were significantly related to subjects' backgrounds. This suggests that interpretations of text are heavily influenced by high-level schemata.
Reading comprehension may be more about what the reader brings to the text than about the 'skill' of making sense of the words and creating meaning from them. More here...thereadingape.com/single-post/20…
The US seems more aware of the issue and has new assessments (PARCC and Smarter Balanced) which are explicitly aligned to the Common Core State Standards.
These focus more on pupils providing evidence from texts to support answers, organise texts by disciplinary area, and also require synthesis across texts and the construction of written arguments based on text sets.
Still timed and largely multiple choice, but they demand more critical, discipline-focused comprehension.
Whole-word reading instruction was rooted in Cattell's (1886) research. He carried out a series of laboratory studies in Wilhelm Wundt's laboratory in Germany, using tachistoscopic (brief-exposure) techniques to measure how quickly letters and words could be recognised (Rayner et al., 2012).
Cattell (1886) discovered that in ten milliseconds a reader could apprehend equally well three or four unrelated letters, two unrelated words, or a short sentence of four words - approximately twenty-four letters (Anderson and Dearborn, 1952).
The generalisation advanced from this outcome was that words and sentences are easier to read than letters. This resulted in the deduction that humans do not read words by the serial decoding of individual letters but read whole words in their entirety.
Fluency is a continuum and not a threshold. It develops after orthographic skills (automatic word recognition) become embedded - the bottleneck in reading development. This appears to be self-taught (Share, 2004), so it requires heavy reading mileage and repeated exposure to words.
Don’t rush through this in a race to fluency and comprehension. Instant word recognition is a key development phase and is predicated on significant code knowledge. Hence the vital contribution from decodable texts.
Instant word recognition is arguably more important in primary education than fluency (which will never develop without it). It can take time and be laborious, and children often sound slow and halting while it develops, but it is crucial.
The 'language surplus' gifted by privilege often distorts early reading instruction, and primary education in general, as we try to repay a language/privilege deficit with undue haste in an understandable desire for social justice. Cognitive load will always apply.
Far better to ensure foundations are secure - particularly focusing on automaticity (arguably the main concern of primary education) in all areas - and, in reading, to build to fluency slowly and surely. Discrete, global and cultural knowledge is assigned to the curriculum.
The deficit is unlikely to be repaid by the end of primary education but with fluency in place and the associated release of cognitive load, the opportunity for repayment of the deficit and building of substantial surplus and privilege is possible throughout secondary education.
Sorry Mat, only just got to this. Beck and McKeown carried out considerable research on QtA, and their studies of teacher activity and use of querying suggested that practices did indeed develop. However, in terms of pupil outcomes their research was inconclusive.
Their 1996 study (as I said, this is old stuff) used a control group that received instruction from a basal programme. There were significant differences in favour of QtA between pupils who received QtA instruction and those who read without instructional support.
However, there were no significant differences between pupils who received QtA instruction and the basal instruction control group. These findings were repeated by Garcia et al. (2007) using 'responsive engagement' - similar to QtA. The control group had vocabulary instruction.
Try these: Beck’s (1998) assertion that phoneme-to-grapheme mapping is to reading as dribbling is to basketball: necessary but not sufficient to play the game. The implication is clear: without sufficient phonic knowledge, reading may not be possible.
Daniels and Diack (1956) - to ignore the alphabet when teaching the decoding of English is inexplicable.
The whole-word method is undermined by humans' limited capacity to memorise symbols. Chinese children are expected to recognise only 3,500 characters by age eleven (Leong, 1973), and it takes twelve years of study to learn 2,000 logographs in Japanese (Gough, 2006).