A fantastic talk by Prof. @gemmaboleda of @putaupf on #colexification at @abralin_oficial, covering almost everything discussed in her paper co-authored with Prof. Thomas Brochhagen, also of @putaupf (links to the paper and the authors' webpages can be found in the comments). 1/11 🧵
Here is a summary of what the talk and the corresponding paper are all about:
"When do languages use the same word for different meanings? The #Goldilocks #Principle in the #lexicon 2/11
It is common for languages to express multiple meanings with the same word, a phenomenon known as "#colexification". For instance, the meanings FINGER and TOE #colexify in the word 'dit' in Catalan (the word 'dit' expresses both meanings), while they do not colexify in English. 3/11
Colexification has been suggested to follow universal constraints. In particular, previous work has shown that related meanings are more prone to #colexify. This tendency has been explained in terms of the cognitive pressure for simplicity... 4/11
since expressing related meanings with the same word makes lexicons easier to learn and use. The present study examines the interplay between this pressure and a competing universal constraint, the #functional #pressure for languages to maximize #informativeness. 5/11
We hypothesize that colexification follows a #Goldilocks #principle: meanings are more likely to colexify if they are related (fostering #simplicity), but not so related as to become confusable and cause misunderstandings (fostering #informativeness). 6/11
We find support for this principle in data from over 1200 languages and 1400 meanings. Our results thus suggest that universal principles shape the lexicons of natural languages, and contribute to the growing body of evidence suggesting that languages evolve to strike a... 7/11
balance between competing functional and cognitive pressures." (Courtesy of @AbraLin's YouTube channel. P.S. the hashtags are mine.) 8/11
Finally, somebody in NLP started talking about structures! Semantic, syntactic, and their interplay.
If you're profoundly curious about 'Semantic Structure in Deep Learning', this paper by Dr. Ellie Pavlick of @BrownUniversity might be a good fit for you. 1/3 🧵 #ai #nlp
Check out this wonderful paper by Prof. @JudithTonhauser & Dr. Judith Degen. After reading it, I can say I somewhat recovered from my depression and got a bit more motivated. A tribute to both of these venerable scholars! 1/4 🧵
I'll post the links to their pages, in case you're interested in their research.
Link to the paper: ling.auf.net/lingbuzz/006771 2/4
What happened to stratificational (neuro-cognitive) linguistics? It had some drawbacks, but it was (and still can be, imho) a very efficient approach to bridging the gap between the symbolic and connectionist approaches to language. Even Prof. Lamb's Lab has not been updated 1/4
since 2010. Here are the links to his Lab and his most recent paper, 'Linguistic Structure: A Plausible Theory' (2016). When it comes to the interfaces, the need for transduction (as opposed to computation) increases, since symbolic (and digital) items are to be encoded to fuzzy 2/4