Very excited about Hope Morgan et al.'s talk on phonological (wiggle-fingers) complexity and frequency distribution! Relevant to much of my own work! #TISLR13
Phonological complexity can be defined by 1) markedness (frequency/economy) and 2) structure (quantity).
E.g.:
Some handshapes are easier (1). Two-handed signs with simultaneous movements have more structure (2). #TISLR13
Research question: is there an upper limit to how much complexity can be packed into a sign? #TISLR13
Research question: does phonological complexity change over time (language age as a factor)? #TISLR13
Research question: how does frequency relate to phonological complexity?
We already know that lexical frequency correlates inversely with sign duration (citing Börstell et al. 2016, 2019 😍) #TISLR13
A scoring matrix defining a phonological complexity index. #TISLR13
Results: Generally, signs are not very phonologically complex (the distribution peaks in the lower range of the scale). #TISLR13
Results: the distribution across the complexity scale correlates with the age of the language. Do #signlanguages become more phonologically complex over time? #TISLR13
Plenty of future directions! Community size, neighborhood complexity, etc.
As for me (Calle), I'mma head over to Hope during the ☕ break and suggest a paper together. Hashtag networking hashtag I love all you TISLRs! #TISLR13
Last night I was playing a little with OpenPose data in #RStats. I realized it's not too hard to wrangle the OpenPose output and plot signing directly using #ggplot2 and #gganimate, like so:
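For anyone curious, the wrangling step can be sketched in just a few lines. This is not my exact code (the folder name and the one-signer-per-frame assumption are placeholders), but it shows the idea: OpenPose writes one JSON file per frame with flat x/y/confidence triplets, so you reshape those into columns and hand them to #ggplot2 + #gganimate.

```r
# Hedged sketch: read OpenPose's per-frame JSON, reshape the flat keypoint
# vectors into (x, y, confidence) columns, then animate the points per frame.
library(jsonlite)
library(dplyr)
library(purrr)
library(ggplot2)
library(gganimate)

files <- list.files("openpose_output", pattern = "\\.json$", full.names = TRUE)

keypoints <- map_dfr(seq_along(files), function(i) {
  pp <- fromJSON(files[i], simplifyVector = FALSE)
  kp <- unlist(pp$people[[1]]$pose_keypoints_2d)   # first detected person
  tibble(
    frame = i,
    point = seq_len(length(kp) / 3),
    x     = kp[c(TRUE, FALSE, FALSE)],
    y     = kp[c(FALSE, TRUE, FALSE)],
    conf  = kp[c(FALSE, FALSE, TRUE)]
  ) %>% filter(conf > 0)                           # drop undetected keypoints
})

ggplot(keypoints, aes(x, -y)) +                    # flip y: image coordinates run downwards
  geom_point(size = 2) +
  coord_fixed() +
  theme_void() +
  transition_manual(frame)
```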
But I decided to make some tweaks so you can change the color of the signer+clothes, which makes seeing the hands a bit easier (contrast!)...
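Roughly like this, as a hedged sketch (the keypoint-to-part groupings below are illustrative placeholders, not the official OpenPose index list):

```r
# Tag each keypoint with a body part and map it to a manual colour scale so the
# hands stand out against the clothes. Index groupings are assumptions.
library(dplyr)
library(ggplot2)

keypoints_col <- keypoints %>%
  mutate(part = case_when(
    point %in% c(5, 6, 8, 9)   ~ "hands/arms",     # assumed indices
    point %in% c(2, 3, 12, 13) ~ "torso/clothes",
    TRUE                       ~ "other"
  ))

ggplot(keypoints_col, aes(x, -y, colour = part)) +
  geom_point(size = 2) +
  scale_colour_manual(values = c("hands/arms"    = "grey10",
                                 "torso/clothes" = "darkred",
                                 "other"         = "grey70")) +
  coord_fixed() +
  theme_void()
```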
But also, why not give your signer a pretty turtleneck to wear?
You guys know that IKEA products are basically just #Swedish words and place names, right? Walking around an IKEA store is like walking through a dictionary.
Here's a script simulating the idea with Swedish and other places/languages: github.com/borstell/fakea
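The core of it is tiny. A toy sketch of the idea (the word lists here are just a handful of examples, not the script's actual data):

```r
# Pair random Swedish place names (the "product name") with product types,
# IKEA-style. Both vectors are illustrative samples only.
platser   <- c("Alvesta", "Bromma", "Knislinge", "Sunne", "Vittsjö", "Ystad")
produkter <- c("bokhylla", "pall", "fåtölj", "matta", "lampa")  # bookcase, stool, armchair, rug, lamp

paste(toupper(sample(platser, 5, replace = TRUE)),
      sample(produkter, 5, replace = TRUE))
# e.g. "SUNNE pall"  "BROMMA lampa"  "YSTAD matta"
```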
So you can now input a video and get it back slower and/or repeated. Here's an example of a sign for 'deaf' in STS rendered with a repeated 30% speed playback!
(Oh, and it can be passed to the make_gif() function as well!)
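Not the actual make_gif() internals, but the general approach can be sketched with the magick package (file names and defaults below are placeholders):

```r
# Read frames from a video, repeat the clip, and re-animate at a slower rate.
# This is a hedged sketch, not the real make_gif(); arguments are assumptions.
library(magick)

slow_repeat_gif <- function(video, out = "out.gif",
                            speed = 0.3, repeats = 2, fps_in = 25) {
  frames <- image_read_video(video, fps = fps_in)    # one magick image per frame
  frames <- do.call(c, rep(list(frames), repeats))   # repeat the whole clip
  delay  <- round(100 / (fps_in * speed))            # per-frame delay in 1/100 s
  image_write(image_animate(frames, delay = delay), out)
}

# slow_repeat_gif("deaf_sts.mp4", "deaf_sts_slow.gif", speed = 0.3, repeats = 2)
```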
And the automatic face blurring works great! Even with multiple people in the image (or, like here, multiple repetitions of the same person in one composite image)!
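(The detection step aside, the blurring itself is a few lines of magick. A hedged sketch, assuming you already have face bounding boxes from some detector; the coordinates and file name are made up.)

```r
# Crop the (assumed) face region, blur it, and paste it back onto the image.
library(magick)

blur_region <- function(img, x, y, w, h, sigma = 20) {
  face    <- image_crop(img, geometry_area(w, h, x, y))
  blurred <- image_blur(face, radius = 0, sigma = sigma)
  image_composite(img, blurred, offset = geometry_point(x, y))
}

img <- image_read("composite_frame.png")   # placeholder file name
img <- blur_region(img, x = 120, y = 60, w = 80, h = 80)
```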
So, it's like *very* easy to process and reconstruct actual images with only a few lines of code. As in: the plotting software redrawing the image, pixel by pixel.
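Something along these lines (a hedged sketch; the file name is a placeholder, and I'm using the png package here for the reading step):

```r
# Read an image into a data frame of pixel coordinates + colours and let
# ggplot2 redraw it with geom_raster().
library(png)
library(ggplot2)

img <- readPNG("me.png")                          # array: height x width x channels, values 0-1

px <- expand.grid(y = seq_len(dim(img)[1]),
                  x = seq_len(dim(img)[2]))
px$col <- rgb(img[, , 1][as.matrix(px)],          # index the R/G/B planes by (y, x)
              img[, , 2][as.matrix(px)],
              img[, , 3][as.matrix(px)])

ggplot(px, aes(x, -y, fill = col)) +              # flip y: image rows run downwards
  geom_raster() +
  scale_fill_identity() +
  coord_fixed() +
  theme_void()
```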
Here's a gif of me made with #ggplot2 and #gganimate. Sunday = fun day!