Keynote 2 of day 2 at #LingCologne: @ozyurek_a (@GestureSignLab) on the integration of hand gestures in spoken language as multimodal communication.
Traditional approaches to language research have focused on language as:
— spoken/written (not visual)
— arbitrary (not iconic)
— discrete/categorical
— unichannel (not multimodal)
Luckily, more recent work has broadened the perspective wrt the above points. #LingCologne
If gesture is only simulated action, gesturing should look the same regardless of one's language. However, the interface model predicts that gesture is integrated with language, such that properties of one's language will influence gesture strategies. #LingCologne
Looking at speech and gesture descriptions across languages, it was indeed found that properties of verb phrase structure influence the gesture strategies employed (including types of iconicity). Linguistic structure shapes iconicity even in blind gesturers! #LingCologne
Gestures clearly enhance communication: differences are seen in the amount of co-speech gesture in descriptions directed at adults vs. children. #LingCologne
And a match or mismatch between speech and gesture may create a kind of McGurk effect, affecting comprehension. #LingCologne
("Want a loo, sir?")
Concludes the talk by acknowledging all the collaborators of the @GestureSignLab at @Radboud_Uni and @MPI_NL. Definitely a lot of great work coming out of this research group! #LingCologne
• • •
Last night I was playing a little with OpenPose data in #RStats. I realized it's not too hard to wrangle the OpenPose output and plot signing directly using #ggplot2 and #gganimate, like so:
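(The exact code from the thread isn't shown, but a minimal sketch gets you most of the way. This assumes OpenPose's standard output of one JSON file per frame, with keypoints stored as flat x/y/confidence triplets; the directory name and fps are placeholders:)

```r
# Minimal sketch (not the thread's exact code): read OpenPose's per-frame
# JSON output and animate the 2D pose keypoints with ggplot2 + gganimate.
library(jsonlite)
library(dplyr)
library(purrr)
library(ggplot2)
library(gganimate)

files <- list.files("openpose_output", pattern = "keypoints\\.json$", full.names = TRUE)

keypoints <- imap_dfr(files, function(f, frame) {
  kp <- fromJSON(f)$people$pose_keypoints_2d[[1]]  # first detected person
  tibble(
    frame = frame,
    point = seq_len(length(kp) / 3),
    x     = kp[seq(1, length(kp), by = 3)],
    y     = kp[seq(2, length(kp), by = 3)],
    conf  = kp[seq(3, length(kp), by = 3)]
  ) %>%
    filter(conf > 0)  # drop keypoints OpenPose failed to detect
})

p <- ggplot(keypoints, aes(x, -y)) +  # flip y: image coordinates grow downward
  geom_point(size = 3) +
  coord_fixed() +
  theme_void() +
  transition_manual(frame)

animate(p, fps = 10)
```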
But I decided to make some tweaks so you can change the color of the signer+clothes, which makes seeing the hands a bit easier (contrast!)...
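(A hypothetical version of that tweak, continuing the sketch above: map a manual color per body part so the hands pop against the body. The part assignment here is schematic; real keypoint indices depend on the OpenPose model, e.g. BODY_25 vs. COCO:)

```r
hand_points <- c(4, 7)  # e.g. the wrists in BODY_25; placeholder indices

keypoints <- keypoints %>%
  mutate(part = if_else(point %in% hand_points, "hands", "body"))

ggplot(keypoints, aes(x, -y, color = part)) +
  geom_point(size = 3) +
  scale_color_manual(values = c(body = "grey40", hands = "orange")) +  # contrast!
  coord_fixed() +
  theme_void() +
  transition_manual(frame)
```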
But also, why not give your signer a pretty turtleneck to wear?
You guys know that IKEA products are basically just #Swedish words and place names, right? Walking around an IKEA store is like walking through a dictionary.
Here's a script simulating the idea with Swedish and other languages/place names: github.com/borstell/fakea
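(Illustrative only, not the actual script from the repo, but the gist is just pairing random Swedish-looking words with product types:)

```r
# Toy version of the idea: random Swedish words as IKEA-style product names.
swedish_words <- c("SKOG", "HOLM", "STRAND", "VIK", "BLAD", "BORD")
products <- c("bookcase", "lamp", "mug", "rug", "chair")
paste(sample(swedish_words, 5), sample(products, 5))
```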
So you can now input a video and get it back slower and/or repeated. Here's an example of a sign for 'deaf' in STS rendered as a repeated 30% speed playback!
(Oh, and passed to the make_gif() function as well!)
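(A rough sketch of the idea, assuming {magick}; the thread's own slow-down and make_gif() code isn't shown. Sample frames from the video, repeat the sequence, and raise the frame delay to slow playback:)

```r
library(magick)

slow_repeat_gif <- function(path, speed = 0.3, times = 2, out = "out.gif", fps = 10) {
  frames <- image_read_video(path, fps = fps)              # one image per sampled frame
  frames <- frames[rep(seq_along(frames), times = times)]  # repeat the clip
  # delay is in 1/100 s per frame; dividing by speed slows playback
  gif <- image_animate(frames, delay = round(100 / (fps * speed)))
  image_write(gif, out)
}

slow_repeat_gif("deaf_sts.mp4", speed = 0.3, times = 2)  # placeholder filename
```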
And the automatic face blurring works great! Even with multiple people in the image (or, like here, multiple repetitions of the same person in one composite image)!
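(One way to do it, sketched with the rOpenSci {opencv} and {magick} packages since the thread doesn't show its own code: detect face positions with ocv_facemask(), then blur each detected region and composite it back onto the image:)

```r
library(opencv)
library(magick)

blur_faces <- function(path, out = "blurred.jpg") {
  faces <- attr(ocv_facemask(ocv_read(path)), "faces")  # data frame: radius, x, y
  img <- image_read(path)
  for (i in seq_len(nrow(faces))) {
    r    <- faces$radius[i]
    left <- faces$x[i] - r
    top  <- faces$y[i] - r
    region <- sprintf("%dx%d+%d+%d", 2 * r, 2 * r, left, top)
    patch  <- image_blur(image_crop(img, region), radius = 20, sigma = 10)
    img    <- image_composite(img, patch, offset = sprintf("+%d+%d", left, top))
  }
  image_write(img, out)
}
```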
So it's actually *very* easy to process and reconstruct actual images with only a few lines of code: the plotting software literally redraws the image, pixel by pixel.
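(A minimal sketch of the pixel-by-pixel idea: read an image with {magick}, flatten it to a data frame of pixel coordinates + colors, and let ggplot2 redraw it. "me.jpg" is a placeholder path:)

```r
library(magick)
library(ggplot2)

img <- as.raster(image_read("me.jpg"))  # matrix of hex color strings
pixels <- expand.grid(x = seq_len(ncol(img)), y = seq_len(nrow(img)))
pixels$color <- as.vector(t(img))  # raster rows run top-to-bottom, so transpose

ggplot(pixels, aes(x, -y, fill = color)) +
  geom_raster() +
  scale_fill_identity() +  # use the pixel colors as-is
  coord_fixed() +
  theme_void()
```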
Here's a gif of me made with #ggplot2 and #gganimate. Sunday = fun day!