When you hear 'the batter ran to the ball', what do you imagine? A batter? A ball? But perhaps also a pitch, an audience, the sun, and a cap on the batter's head.
What do you imagine when you hear 'the duchess drove to the ball'? /2
We propose a computational model that takes a (simple) sentence and builds a conceptual description for it. In doing so, it captures appropriate word senses and other lexical meaning variations: for example, that the batter's ball is not a dancing event. /3
Simple example: hearing 'bat' in the context of 'vampire' should make you much more likely to understand that the speaker is talking about animals: /4
Hearing 'the vampire is eating' should activate the concept of an object (*what* is the vampire eating?) and that object might be more likely to be a blood orange than another vampire or a castle: /5
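This kind of context-driven sense preference can be pictured as a Bayesian update. Here is a minimal sketch of that idea; the priors and likelihoods are invented illustrative numbers, not values from our model:

```python
# Toy Bayesian update: P(sense | context) ∝ P(context | sense) * P(sense)
# All numbers below are made up for illustration.
priors = {"bat-animal": 0.4, "bat-sports": 0.6}
likelihood = {  # P('vampire' occurs nearby | sense)
    "bat-animal": 0.30,
    "bat-sports": 0.01,
}

def posterior(priors, likelihood):
    """Normalise prior * likelihood into a distribution over senses."""
    joint = {s: priors[s] * likelihood[s] for s in priors}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

print(posterior(priors, likelihood))  # the animal sense now dominates
```

Even with a prior tilted towards the sports sense, one mention of 'vampire' flips the preference.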
Sometimes, different interpretations compete with each other. Consider the sentence:
"The astronomer married the star."
What comes to your mind? The Hollywood star or the celestial object? Both? /6
To handle this, the model generates not a single situation description but many, each with its own probability. For some speakers, the description with the Hollywood star might be more likely than the one where the astronomer married Betelgeuse. Or vice versa. /7
A situation description consists of:
* scenarios (at-the-restaurant, gothic-novel)
* concepts (champagne, vampire)
* individuals (this bottle of champagne, Dracula)
* features of individuals (having a cork, being pale)
* roles (being the agent of a drinking event) /8
Like this: /9
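To make the list above concrete, here is one hypothetical way such a description could be encoded as a data structure. The field names and example values are purely illustrative, not the model's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SituationDescription:
    """One weighted hypothesis about what a sentence describes.
    Field names are illustrative, not our model's actual schema."""
    scenarios: list      # e.g. ["gothic-novel"]
    concepts: list       # e.g. ["vampire", "champagne"]
    individuals: dict    # variable -> individual, e.g. {"x1": "Dracula"}
    features: dict       # variable -> features, e.g. {"x1": ["pale"]}
    roles: dict          # event -> role assignments
    prob: float = 1.0    # probability of this description

d = SituationDescription(
    scenarios=["gothic-novel"],
    concepts=["vampire", "champagne"],
    individuals={"x1": "Dracula", "x2": "this-bottle"},
    features={"x1": ["pale"], "x2": ["has-cork"]},
    roles={"drink-ev": {"agent": "x1", "patient": "x2"}},
    prob=0.7,
)
```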
Technically speaking, the account is implemented as a probabilistic generative model. It takes the logical form of a sentence:
∃x,y [astronomer(x)∧star(y)∧marry(x,y)]
and generates the conceptual descriptions most likely to account for that logical form. /9
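A toy sketch of the enumeration step, for the astronomer/star example: assign each predicate in the logical form a distribution over senses, weight each sense combination by how plausible the resulting marriage is, and normalise. All weights are invented for illustration; a generative model would sample rather than enumerate, but the resulting distribution is the same idea:

```python
from itertools import product

# Invented sense distributions for the predicates in
# ∃x,y [astronomer(x) ∧ star(y) ∧ marry(x,y)]
sense_probs = {
    "astronomer": {"astronomer-person": 1.0},
    "star": {"star-celebrity": 0.55, "star-celestial": 0.45},
}
# Invented plausibility of each sense pairing as a marriage
plausibility = {
    ("astronomer-person", "star-celebrity"): 0.9,
    ("astronomer-person", "star-celestial"): 0.1,
}

def descriptions(preds=("astronomer", "star")):
    """Enumerate sense combinations and return a normalised
    distribution over conceptual descriptions."""
    out = {}
    for senses in product(*(sense_probs[p].items() for p in preds)):
        names = tuple(s for s, _ in senses)
        w = 1.0
        for _, p in senses:
            w *= p
        w *= plausibility[names]
        out[names] = w
    z = sum(out.values())
    return {k: v / z for k, v in out.items()}

print(descriptions())
```

With these numbers, the Hollywood reading comes out far more probable than the Betelgeuse one, but both survive with non-zero mass, which is the point.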
Today is my last day in a university position so I wanted to write a few things about #leavingacademia⬇️
On NLP, Computational Linguistics and Meaning. Above all: on loving your object of study and owning it. \1
I don't often post here. I’m not a great talker. Which is a real shame because my research field loves talking on social media. About big engineering feats, conference acceptance rates, one's own success or struggles. Although, I note, relatively little about language itself. \2
At the risk of sounding sentimental, I became a computational semanticist because I cared about meaning. Deeply. I wanted to know what it was, the same way others want to know about beetles, asteroids or gravitation. I wanted to do science. \3
Here's some flattened portion of semantic space. What do you call meaning? a) the labeled points; b) the black void around them? /1
It seems to me we often think of the labeled vectors as meanings. But of course, the label is just a name for a position in space. A word may vary its position depending on the speaker, on the speaker's age, and on the time at which the vector was constructed (diachronic change). /2
Perhaps more interestingly, modality will move concepts in space. Instances of a kind may be one way, but could or should be another way. So would it be better to see a space as a superposition of possible worlds? And meaning as pervasive in that space? /3
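One way to make 'a word varies its position' measurable is to compare the same word's vector from two sources with cosine similarity. A toy sketch with invented 3-d vectors (real spaces would have hundreds of dimensions and need aligning first):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented vectors for 'ball' as used by two different speakers
ball_speaker_a = [0.9, 0.1, 0.0]  # sports-leaning usage
ball_speaker_b = [0.2, 0.8, 0.3]  # dance-leaning usage
print(cosine(ball_speaker_a, ball_speaker_b))
```

A similarity well below 1.0 would say the two speakers' 'ball' sits in rather different regions of their respective spaces.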
Talking about #compositionality opens a giant can of worms. One worm: what is it that we compose and where does it come from? What is it that does composition, and where does it come from? I've tried to put some thoughts together on #innateness. /1
This thread has three parts: a) how some approaches to compositionality deal with innateness; b) why we should think about innateness in relation to computational models; c) why we should think about innateness in relation to data. /2
PART 1 - HISTORY. I previously mentioned that different approaches to composition have different relations to innateness. Let's get back to some obvious protagonists -- Chomsky, Katz & Fodor... And let's start with a turning point in the historical debate on innateness... /3
Here's a thread surveying some 'classic' work on #compositionality. Lots of people seem to be discussing this right now, but with partial references to the whole story. My aim is to highlight some of the philosophical and psychological issues in the history of the concept. 1/
Small recap first... There are two principles usually associated with #compositionality, both (possibly incorrectly) attributed to Frege. See Pelletier's "Did Frege believe in Frege's principle?" (2001). 2/
1) Bottom-up, the 'compositionality principle': "... an important general principle which we shall discuss later under the name Frege's Principle, that the meaning of the whole sentence is a function of the meanings of its parts." Cresswell (1973) 3/
As we reflect on 2019 and decide on our research directions for 2020, here's a summary of my recent reading. The #context tag tells you *why* I ended up there (thx to @julianharris for the tag suggestion ;)). I'm curious to know what other people's journeys have been this year...
Reading list (1) Language evolution papers. #context I am still trying to figure out what exactly language is for. We've seen some cool work on meaning shift in the last years. But *why* do meanings shift? And what does it tell us about the role of language in human behaviour?
Reading list (2) Papers on aligning vector spaces. #context the whole "let's average the utterances of thousands of speakers in a big corpus and pretend it's language" is rather tedious. My semantic space is not your semantic space. But how best to measure such differences?
I personally find it extremely hard to recommend papers to others because the papers I most cherish are not necessarily relevant to the world at large, and they might not even be what your standard reviewer would consider 'a good paper'.
I cherish papers that connect things I'm thinking about. They're puzzle pieces that fit my own puzzle and may well be totally useless to someone else's puzzle. And sometimes it's not about the whole paper. The missing puzzle piece may be in a single paragraph or a footnote.