To add to this: child language acquisition is a nice illustration of how constructions and composition go hand in hand. Acquisition mixes memorisation of constructions with abstraction processes, in a way that eventually accounts for productive syntax.
I personally got a lot from Tomasello's "Construction Grammar for Kids" (2006): it describes how kids learn context-dependent schemas, and how distributional analysis over such schemas progressively supports the production of increasingly complex novel utterances.
Today is my last day in a university position so I wanted to write a few things about #leavingacademia⬇️
On NLP, Computational Linguistics and Meaning. Above all: on loving your object of study and owning it. /1
I don't often post here. I'm not a great talker. Which is a real shame, because my research field loves talking on social media. About big engineering feats, conference acceptance rates, one's own successes or struggles. Although, I note, relatively little about language itself. /2
At the risk of sounding sentimental, I became a computational semanticist because I cared about meaning. Deeply. I wanted to know what it was, the same way others want to know about beetles, asteroids or gravitation. I wanted to do science. /3
Here's a flattened portion of semantic space. Which part do you call meaning: a) the labeled points, or b) the black void around them? /1
It seems to me we often think of the labeled vectors as meanings. But of course, the label is just a name for a position in space. A word may vary its position depending on the speaker, on the time at which the vector was constructed (diachronic change), or on the age of the speaker. /2
Perhaps more interestingly, modality will move concepts in space. Instances of a kind may be one way, but could or should be another way. So would it be better to see a space as a superposition of possible worlds? And meaning as pervasive in that space? /3
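To make the 'position in space' point concrete, here is a minimal sketch (toy random vectors standing in for real distributional data): flatten a handful of word vectors to 2D and each label becomes nothing more than a name attached to a coordinate.

```python
# Minimal sketch: "flatten" a toy semantic space to 2D with PCA.
# Each label is just a name for a coordinate; everything between the
# labeled points is part of the space too.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
words = ["batter", "ball", "pitch", "duchess", "waltz"]   # hypothetical vocabulary
vectors = rng.normal(size=(len(words), 50))               # toy 50-d "word vectors"

flat = PCA(n_components=2).fit_transform(vectors)
for word, (x, y) in zip(words, flat):
    print(f"{word:10s} -> ({x:+.2f}, {y:+.2f})")
```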
When you hear 'the batter ran to the ball', what do you imagine? A batter? A ball? But perhaps also a pitch, an audience, the sun and a cap on the head of the batter. /1
What do you imagine when you hear 'the duchess drove to the ball'? /2
We propose a computational model which takes a (simple) sentence and builds a conceptual description for it. In doing so, it captures appropriate word senses and other lexical meaning variations, e.g. that the batter's ball is not a dancing event. /3
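Not the actual model, but as a toy-level illustration of what 'capturing the appropriate word sense' from context can look like: compare the sentence context to candidate sense vectors and keep the closest one (the vectors below are hand-made and purely hypothetical).

```python
# Toy illustration only: choose between two senses of "ball" by comparing
# the sentence context to hand-made sense vectors via cosine similarity.
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {                                   # hypothetical context-word vectors
    "batter":  np.array([0.9, 0.1, 0.0]),
    "duchess": np.array([0.0, 0.1, 0.9]),
    "ran":     np.array([0.8, 0.2, 0.1]),
    "drove":   np.array([0.2, 0.3, 0.7]),
}
senses = {                                # hypothetical sense vectors
    "ball (sphere)": np.array([0.9, 0.2, 0.1]),
    "ball (dance)":  np.array([0.1, 0.2, 0.9]),
}

for context in (["batter", "ran"], ["duchess", "drove"]):
    ctx = np.mean([emb[w] for w in context], axis=0)
    best = max(senses, key=lambda s: cos(ctx, senses[s]))
    print(context, "->", best)
```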
Talking about #compositionality opens a giant can of worms. One worm: what is it that we compose and where does it come from? What is it that does composition, and where does it come from? I've tried to put some thoughts together on #innateness. /1
This thread has three parts: a) how some approaches to compositionality deal with innateness; b) why we should think about innateness in relation to computational models; c) why we should think about innateness in relation to data. /2
PART 1 - HISTORY. I previously mentioned that different approaches to composition have different relations to innateness. Let's get back to some obvious protagonists -- Chomsky, Katz & Fodor... And let's start with a turning point in the historical debate on innateness... /3
Here's a thread surveying some 'classic' work on #compositionality. Lots of people seem to be discussing this right now, but with partial references to the whole story. My aim is to highlight some of the philosophical and psychological issues in the history of the concept. 1/
Small recap first... There are two principles usually associated with #compositionality, both (possibly incorrectly) attributed to Frege. See Pelletier's "Did Frege Believe Frege's Principle?" (2001). 2/
1) Bottom-up, the 'compositionality principle': "...an important general principle which we shall discuss later under the name Frege's Principle, that the meaning of the whole sentence is a function of the meanings of its parts." Cresswell (1973) 3/
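Stated abstractly, the principle says the meaning of 'w1 ... wn' is f(meaning(w1), ..., meaning(wn)) for some combination function f, and for nothing else. A deliberately crude sketch of that commitment, with vector addition standing in for f and hypothetical part meanings:

```python
# Deliberately crude reading of the principle: the meaning of the whole is
# some function f of the meanings of its parts, and of nothing else.
import numpy as np

meanings = {                      # hypothetical part meanings
    "the":    np.array([0.0, 0.1]),
    "batter": np.array([0.9, 0.3]),
    "ran":    np.array([0.4, 0.8]),
}

def compose(parts, f=lambda vecs: np.sum(vecs, axis=0)):
    return f([meanings[w] for w in parts])

print(compose(["the", "batter", "ran"]))   # whole fixed by the parts and f alone
```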
As we reflect on 2019 and decide on our research directions for 2020, here's a summary of my recent reading. The #context tag tells you *why* I ended up there (thx to @julianharris for the tag suggestion ;)). I'm curious to know what other people's journeys have been this year...
Reading list (1) Language evolution papers. #context I am still trying to figure out what exactly language is for. We've seen some cool work on meaning shift in recent years. But *why* do meanings shift? And what does it tell us about the role of language in human behaviour?
Reading list (2) Papers on aligning vector spaces. #context The whole "let's average the utterances of thousands of speakers in a big corpus and pretend it's language" approach is rather tedious. My semantic space is not your semantic space. But how best to measure such differences?
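One standard starting point for that question (a minimal sketch with toy vectors, assuming two spaces built over a shared vocabulary): align one space onto the other with orthogonal Procrustes, then look at how far each word still sits from its counterpart. The residual distance after alignment gives a rough per-word measure of how much my space and yours disagree.

```python
# Minimal sketch: align "your" space onto "mine" over a shared vocabulary
# (orthogonal Procrustes), then measure each word's residual displacement.
# Toy random vectors stand in for real embeddings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
vocab = ["meaning", "ball", "space", "language", "shift"]
dim = 50

A = rng.normal(size=(len(vocab), dim))             # my space (toy vectors)
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))   # a random rotation
B = A @ Q + 0.05 * rng.normal(size=A.shape)        # your space: rotated + a little drift

R, _ = orthogonal_procrustes(B, A)                 # best rotation mapping B onto A
B_aligned = B @ R

for word, a, b in zip(vocab, A, B_aligned):
    dist = 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"{word:10s} cosine distance after alignment: {dist:.3f}")
```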