Up and at it again, after a short lunch break. @katerowley0 on visual word recognition in deaf readers. #LingCologne
How do deaf readers connect phonology, orthography, and semantics (since phonology is not directly available)? #LingCologne
In a lexical identification task, deaf and hearing readers had the same reaction times, but deaf readers were more accurate. #LingCologne
In a second experiment, deaf and hearing readers alike were affected by pseudohomophones (<brane> ~ /brein/ ~ <brain>), suggesting phonological processing of orthographic forms. #LingCologne
Whereas both deaf and hearing readers do phonological processing when reading, only hearing readers also react to semantic distractors. For hearing readers, phonological activation is automatic; in deaf readers it is not. #LingCologne
But we normally read sentences, not individual words. Another experiment looked at this, using target vs. phonological, orthographic, and unrelated distractors:
"She decided to cut her {hair, hare, hail, vest} before the wedding." #LingCologne
Orthographic previews are beneficial for deaf and hearing readers alike, but only hearing readers are affected by phonological previews. #LingCologne
The overall conclusion is that deaf readers are less concerned with phonological processing. They can use it (when forced), but it is not automatic as in hearing readers. Lip reading may be one way deaf individuals originally accessed phonology. #LingCologne
• • •
Last night I was playing a little with OpenPose data in #RStats. I realized it's not too hard to wrangle the OpenPose output and plot signing directly using #ggplot2 and #gganimate, like so:
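(Not the exact code from the clip, but a minimal sketch of the idea. It assumes one OpenPose JSON file per frame in a hypothetical openpose_output/ folder, each with the standard flat x/y/confidence keypoint triplets.)

```r
# Minimal sketch: read per-frame OpenPose JSON, reshape the flat
# (x, y, confidence) triplets into a tidy data frame, animate the points.
library(jsonlite)
library(dplyr)
library(purrr)
library(ggplot2)
library(gganimate)

read_frame <- function(file, frame) {
  kp <- fromJSON(file)$people$pose_keypoints_2d[[1]]  # person 1's keypoints
  tibble(
    frame = frame,
    point = seq_len(length(kp) / 3),
    x     = kp[seq(1, length(kp), by = 3)],
    y     = kp[seq(2, length(kp), by = 3)],
    conf  = kp[seq(3, length(kp), by = 3)]
  )
}

files <- list.files("openpose_output", pattern = "\\.json$", full.names = TRUE)
poses <- imap_dfr(files, read_frame)

ggplot(filter(poses, conf > 0.2), aes(x, -y)) +  # flip y: image coords grow downward
  geom_point(size = 3) +
  coord_fixed() +
  theme_void() +
  transition_time(frame)
```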
But I decided to make some tweaks so you can change the color of the signer+clothes, which makes seeing the hands a bit easier (contrast!)...
But also, why not give your signer a pretty turtleneck to wear?
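(The tweaks above are my own code, but a sketch of the coloring idea could look like this, reusing the poses data frame from the earlier sketch. The wrist indices are an assumption, the 1-based equivalents of OpenPose's BODY_25 wrists.)

```r
# Hypothetical coloring tweak: hands in a contrasting color vs. body/"clothes".
poses <- mutate(poses, part = ifelse(point %in% c(5, 8), "hands", "body"))

ggplot(filter(poses, conf > 0.2), aes(x, -y, color = part)) +
  geom_point(size = 3) +
  scale_color_manual(values = c(body = "navy", hands = "orange")) +
  coord_fixed() +
  theme_void() +
  transition_time(frame)
```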
You guys know that IKEA products are basically just #Swedish words and place names, right? Walking around an IKEA store is like walking through a dictionary.
This is a script simulating the idea, both for Swedish and for other places/languages: github.com/borstell/fakea
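(The linked script does the real work; as a toy sketch of the idea, you can just sample real Swedish place names and shout them in IKEA caps.)

```r
# Toy sketch, not the linked script: a handful of real Swedish place names
# drawn at random and styled like product names.
fake_ikea <- function(n = 3) {
  places <- c("Ystad", "Sollentuna", "Smygehuk", "Arvika",
              "Åkersberga", "Torsby", "Vimmerby", "Hällefors")
  toupper(sample(places, n))
}
fake_ikea()
```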
So you can now input a video and get it back slower and/or repeated. Here's an example of a sign for 'deaf' in STS rendered with repeated playback at 30% speed!
(Oh, and passed to the make_gif() function as well!)
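(make_gif() is my own function; for a rough sketch of a similar slow/repeat effect you could use the magick package, which reads video frames via av. The file name and settings below are placeholders.)

```r
library(magick)

# Sketch: sample frames from a video, repeat the clip, slow the playback,
# and write the result out as a gif.
slow_repeat_gif <- function(video, out = "out.gif", speed = 0.3, reps = 2) {
  frames <- image_read_video(video, fps = 10)    # sample 10 frames per second
  frames <- Reduce(c, rep(list(frames), reps))   # repeat the whole clip
  delay  <- round(100 / (10 * speed))            # per-frame delay in 1/100 s
  image_write(image_animate(frames, delay = delay), out)
}

slow_repeat_gif("deaf_sts.mp4", speed = 0.3, reps = 2)  # hypothetical input
```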
And the automatic face blurring works great! Even with multiple people in the image (or, like here, multiple repetitions of the same person in one composite image)!
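(The detection part is what makes it automatic; the blurring step itself is a few lines of magick. Here's a sketch with hard-coded placeholder coordinates; in practice you could take them from, e.g., the OpenPose face keypoints.)

```r
library(magick)

# Blur one rectangular region: crop it out, blur it, composite it back.
blur_region <- function(img, x = 200, y = 60, w = 120, h = 120) {
  geom <- sprintf("%dx%d+%d+%d", w, h, x, y)
  face <- image_blur(image_crop(img, geom), radius = 20, sigma = 10)
  image_composite(img, face, offset = sprintf("+%d+%d", x, y))
}
```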
So it's *very* easy to process and reconstruct actual images with only a few lines of code, as in having the plotting software redraw the image, pixel by pixel.
Here's a gif of me made with #ggplot2 and #gganimate. Sunday = fun day!
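(A minimal sketch of the pixel-by-pixel idea, with a placeholder file name: read an image, turn every pixel into a row of x/y/color, and let #ggplot2 redraw it. The gif just repeats this per frame, animated with #gganimate, e.g. transition_manual() over a frame column.)

```r
library(magick)
library(ggplot2)

bmp <- image_data(image_read("me.png"), channels = "rgb")  # raw array: 3 x width x height
w   <- dim(bmp)[2]
h   <- dim(bmp)[3]

df <- expand.grid(x = seq_len(w), y = seq_len(h))  # one row per pixel
df$color <- rgb(as.integer(bmp[1, , ]) / 255,      # red
                as.integer(bmp[2, , ]) / 255,      # green
                as.integer(bmp[3, , ]) / 255)      # blue

ggplot(df, aes(x, -y, fill = color)) +  # flip y: row 1 is the top of the image
  geom_raster() +
  scale_fill_identity() +
  coord_fixed() +
  theme_void()
```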