Deictic pointing gestures appear early in communication, reflect interactional skills and coordinated attention, and aid lexical acquisition. #LingCologne
Iconic gestures come later in development and are more complex: what does the hand represent (object vs. handling), and from which perspective (observer vs. character)? #LingCologne
Conventional gestures include headshakes and nods. Like deictic and iconic gestures, they convey and reinforce meaning. #LingCologne
Two views of gesture:
— It aids speech
— It is itself part of grammatical structure
Over time, synchronization emerges within the child: gesture and speech signals converge, co-occurring and aligning prosodically. #LingCologne
Interim summary: gesture and speech form an integrated system. #LingCologne
Moving on to interpersonal synchronization. Eye-gaze coordination is present already at 3 months. #LingCologne
Later, eye contact breaks more often, but joint attention interacts with vocabulary and deictic gestures in intersubjective communication. #LingCologne
Gesture matters to the speaker:
— engages motor system
— activates and manipulates spatio-motoric information for speaking and thinking #LingCologne
Gesture matters to the listener:
— listeners extract info from gestures and adjust their communication
— caregivers adjust to their children's linguistic skills #LingCologne
Last night I was playing a little with OpenPose data in #RStats. I realized it's not too hard to wrangle the OpenPose output and plot signing directly using #ggplot2 and #gganimate, like so:
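For anyone curious, here's a minimal sketch of the wrangling step. OpenPose writes keypoints as a flat (x, y, confidence) vector per frame; `keypoints_to_df()` is my own helper name, and the plotting part is sketched in comments:

```r
# Minimal sketch: reshape one frame's flat OpenPose keypoint vector
# (x1, y1, c1, x2, y2, c2, ...) into a tidy data frame
keypoints_to_df <- function(kp, frame_id = 1) {
  data.frame(
    frame = frame_id,
    point = seq_len(length(kp) / 3),
    x     = kp[seq(1, length(kp), by = 3)],
    y     = kp[seq(2, length(kp), by = 3)],
    conf  = kp[seq(3, length(kp), by = 3)]
  )
}

# With all frames stacked into one data frame `poses`, the plot is just:
# library(ggplot2); library(gganimate)
# ggplot(poses, aes(x, -y)) +   # flip y: image coordinates run downward
#   geom_point() +
#   coord_equal() +
#   theme_void() +
#   transition_time(frame)
```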
But I decided to make some tweaks so you can change the color of the signer+clothes, which makes seeing the hands a bit easier (contrast!)...
But also, why not give your signer a pretty turtleneck to wear?
You guys know that IKEA products are basically just #Swedish words and place names, right? Walking around an IKEA store is like walking through a dictionary.
Here's a script simulating the idea for Swedish and for other places/languages: github.com/borstell/fakea
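The gist is tiny. Here's a toy version of the idea (not the actual fakea code; the syllable lists are made up) that glues Swedish-looking chunks into IKEA-style names:

```r
# Toy sketch (not the actual fakea code): build fake IKEA-style names
# from made-up Swedish-looking onsets, vowels, and codas
fake_ikea <- function(n = 5, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  onsets <- c("sk", "fj", "gr", "bl", "sn", "tr", "v", "h", "l", "m")
  vowels <- c("a", "e", "i", "o", "u", "å", "ä", "ö")
  codas  <- c("rp", "nd", "ll", "st", "rk", "ng", "m", "t", "n")
  words <- replicate(n, paste0(
    sample(onsets, 1), sample(vowels, 1),
    sample(codas, 1), sample(vowels, 1),
    sample(c("n", "l", "r", "t"), 1)
  ))
  toupper(words)  # IKEA names are conventionally all caps
}
```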
So you can now feed in a video and get it back slowed down and/or repeated. Here's an example of a sign for 'deaf' in STS rendered with repeated playback at 30% speed!
(Oh, and passed to the make_gif() function as well!)
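Under the hood the playback trick is simple: repeat the frames and stretch the frame delay. A rough sketch of the delay arithmetic, with the magick-based part in comments (this is my stand-in for the idea, not the actual make_gif() code, and the filenames are placeholders):

```r
# playback_delay: gif frame delay in 1/100 s for a given base fps and a
# speed factor (0.3 = play at 30% of the original speed)
playback_delay <- function(fps, speed) round(100 / (fps * speed))

# With the magick package, the rest of the sketch would be:
#   library(magick)
#   frames <- rep(image_read("input.gif"), 2)   # repeat the clip
#   anim <- image_animate(frames, delay = playback_delay(10, 0.3))
#   image_write(anim, "output.gif")
```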
And the automatic face blurring works great! Even with multiple people in the image (or, like here, multiple repetitions of the same person in one composite image)!
So, it's like *very* easy to process and reconstruct actual images with only a few lines of code. As in plotting software redrawing the image, pixel by pixel.
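Sketch of the core trick (`pixels_to_df()` is my own helper name; the pixel array would come from something like `as.integer(magick::image_data(img, "rgb"))`):

```r
# pixels_to_df: flatten a (3, width, height) integer RGB array into a
# data frame with one row per pixel and a hex color string
pixels_to_df <- function(arr) {
  df <- expand.grid(x = seq_len(dim(arr)[2]), y = seq_len(dim(arr)[3]))
  idx <- cbind(df$x, df$y)
  df$col <- grDevices::rgb(arr[1, , ][idx], arr[2, , ][idx],
                           arr[3, , ][idx], maxColorValue = 255)
  df
}

# Then "redrawing" really is just plotting, pixel by pixel:
# library(ggplot2)
# ggplot(pixels_to_df(arr), aes(x, -y, fill = col)) +  # flip y axis
#   geom_raster() +
#   scale_fill_identity() +
#   coord_equal() +
#   theme_void()
```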
Here's a gif of me made with #ggplot2 and #gganimate. Sunday = fun day!