A very interesting story here is Ardavan starting out in language documentation as an informant, but getting more and more involved, gaining experience and skills along the way (through interaction, collaboration, sharing), and ending up as a researcher! #wfd2019
Involving the #Deaf community and individual participants is crucial. Communication, information, consent. This enriches the collaboration between researchers and informants and gives agency and empowerment to language users! #wfd2019
Confession: When I first started researching #signlanguages, I was a terrible signer, slightly uncomfortable in signing environments, and quite bad at informing and involving Deaf participants and even colleagues...
... But I was fortunate enough to be working at @TspLingSU, which is a Deaf-led Deaf-majority research and teaching group, with colleagues who were supportive and gave me room and experience to improve both signing and research outreach...
... There's still a lot more improvement needed in my case, which is apparent from this presentation, but I've been encouraged (and accepted) to give signed presentations of my research (in both 🇸🇪 and 🇳🇱), and have written popsci summaries of my work (in Deaf and general pubs).
I want to be an ally and do the best I can in this field, so I am grateful to my Deaf friends and colleagues for support and encouragement, but also welcome criticism (when needed). ❤️🤟
Last night I was playing a little with OpenPose data in #RStats. I realized it's not too hard to wrangle the OpenPose output and plot signing directly using #ggplot2 and #gganimate, like so:
But I decided to make some tweaks so you can change the colors of the signer and their clothes, which makes the hands a bit easier to see (contrast!)...
But also, why not give your signer a pretty turtleneck to wear?
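In case you want to try this yourself, here's a minimal sketch of the kind of wrangling involved. The keypoint values are made up for illustration, but the shape is real: OpenPose writes one JSON file per frame, with keypoints stored as a flat vector of (x, y, confidence) triplets.

```r
library(ggplot2)
library(gganimate)

# Fake keypoint data for two frames, in OpenPose's flat
# (x, y, confidence) triplet layout (values made up here)
frame1 <- c(100, 50, 0.9,  120, 80, 0.8,  90, 140, 0.7)
frame2 <- c(105, 55, 0.9,  125, 85, 0.8,  95, 145, 0.7)

# Reshape one flat triplet vector into a tidy data frame
tidy_keypoints <- function(kp, frame) {
  m <- matrix(kp, ncol = 3, byrow = TRUE)
  data.frame(x = m[, 1], y = m[, 2], conf = m[, 3],
             keypoint = seq_len(nrow(m)), frame = frame)
}

pose <- rbind(tidy_keypoints(frame1, 1), tidy_keypoints(frame2, 2))

# Plot the keypoints and animate over frames; scale_y_reverse()
# because image coordinates put the origin at the top left
p <- ggplot(pose, aes(x, y, group = keypoint)) +
  geom_point(size = 3) +
  scale_y_reverse() +
  coord_fixed() +
  transition_time(frame)
# animate(p) renders the animation
```

With real data you'd read each frame's JSON (e.g. with jsonlite), pull out `pose_keypoints_2d`, and connect keypoints into a skeleton, but the tidy-then-animate idea is the same.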
You guys know that IKEA products are basically just #Swedish words and place names, right? Walking around an IKEA store is like walking through a dictionary.
Here's a script simulating the idea with Swedish (and other languages/place names): github.com/borstell/fakea
So now you can input a video and output it slowed down and/or repeated. Here's an example of a sign for 'deaf' in STS rendered as a repeated playback at 30% speed!
(Oh, and passed to the make_gif() function as well!)
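This isn't the actual make_gif() function, but a self-contained sketch of the same slow-and-repeat effect using the magick package. The frames are drawn synthetically here so it runs without a video file; with a real clip you'd extract the frames first (e.g. with av::av_video_images()), and the 25 fps original frame rate is just an assumption.

```r
library(magick)

# Fake a 10-frame "video" of a dot moving across the image
# (stand-in for frames extracted from a real video)
frames <- image_join(lapply(1:10, function(i) {
  fig <- image_blank(100, 100, color = "white")
  image_annotate(fig, "o", size = 30, location = sprintf("+%d+40", i * 8))
}))

# Repeat the sequence twice, then write a gif at 30% of the
# assumed 25 fps speed (delay = seconds per frame)
slowed <- image_join(frames, frames)
image_write_gif(slowed, tempfile(fileext = ".gif"), delay = (1 / 25) / 0.3)
```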
And the automatic face blurring works great! Even with multiple people in the image (or, like here, multiple repetitions of the same person in one composite image)!
So it's *very* easy to process and reconstruct actual images with only a few lines of code, as in the plotting software redrawing the image, pixel by pixel.
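The pixel-by-pixel redrawing can be sketched like this: one data frame row per pixel, with geom_raster() and scale_fill_identity() painting each pixel its own hex color. A random RGB array stands in here for a photo you'd read with png::readPNG().

```r
library(ggplot2)

# Synthetic stand-in for a photo: a height x width x 3 RGB array
h <- 40; w <- 60
img <- array(runif(h * w * 3), dim = c(h, w, 3))

# One row per pixel: coordinates plus the pixel's hex color.
# expand.grid(y, x) varies y fastest, matching R's column-major
# order when the channel matrices are flattened by rgb()
px <- expand.grid(y = seq_len(h), x = seq_len(w))
px$col <- rgb(img[, , 1], img[, , 2], img[, , 3])

# Redraw the image pixel by pixel; scale_y_reverse() flips the
# y axis so the image isn't upside down
p <- ggplot(px, aes(x, y, fill = col)) +
  geom_raster() +
  scale_fill_identity() +
  scale_y_reverse() +
  coord_fixed() +
  theme_void()
```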
Here's a gif of me made with #ggplot2 and #gganimate. Sunday = fun day!