Aurelie Herbelot
Sep 21, 2020 · 11 tweets · 3 min read
New pre-print with Katrin Erk:

"How to marry a star? Probabilistic constraints for meaning in context."

A theoretical paper on evocation: how people 'imagine' an entire situation from a few words, and how meaning is contextualised in the process.

arxiv.org/abs/2009.07936 /1
#lightreading summary ⬇️

When you hear 'the batter ran to the ball', what do you imagine? A batter? A ball? But perhaps also a pitch, an audience, the sun and a cap on the head of the batter.

What do you imagine when you hear 'the duchess drove to the ball'? /2
We propose a computational model that takes a (simple) sentence and builds a conceptual description for it. In doing so, it captures appropriate word senses and other lexical meaning variations, e.g. that the batter's ball is not a dancing event. /3
Simple example: hearing 'bat' in the context of 'vampire' should make you much more likely to understand that the speaker is talking about animals: /4 [image]
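A toy sketch (not the paper's actual model) of how a context word can reweight sense probabilities; all numbers below are invented for illustration:

```python
# Toy sketch only: reweighting the senses of 'bat' once 'vampire' is observed.
# Posterior P(sense | context) is proportional to P(context word | sense) * P(sense).
# All numbers are invented for illustration.

priors = {"bat-animal": 0.4, "bat-sports": 0.6}         # out-of-context sense priors
likelihoods = {"bat-animal": 0.30, "bat-sports": 0.01}  # P('vampire' nearby | sense)

unnormalised = {s: priors[s] * likelihoods[s] for s in priors}
z = sum(unnormalised.values())
posterior = {s: p / z for s, p in unnormalised.items()}

print(posterior)  # the animal sense now dominates: ~0.95 vs ~0.05
```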
Hearing 'the vampire is eating' should activate the concept of an object (*what* is the vampire eating?) and that object might be more likely to be a blood orange than another vampire or a castle: /5 [image]
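Another toy sketch, this time of a verb opening an object slot and ranking candidate fillers; the frame layout, candidates and weights are invented for illustration:

```python
# Toy sketch only: 'the vampire is eating' opens an event frame with the agent
# filled and an object slot activated but empty. Candidate fillers and their
# weights are invented.

frame = {"event": "eat", "agent": "vampire", "object": None}

object_candidates = {"blood orange": 0.60, "another vampire": 0.05, "castle": 0.01}
z = sum(object_candidates.values())

for filler, weight in sorted(object_candidates.items(), key=lambda kv: -kv[1]):
    print(f"{filler}: {weight / z:.2f}")

frame["object"] = max(object_candidates, key=object_candidates.get)
print(frame)  # {'event': 'eat', 'agent': 'vampire', 'object': 'blood orange'}
```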
Sometimes, different interpretations compete with each other. Consider the sentence:

"The astronomer married the star."

What comes to your mind? The Hollywood star or the celestial object? Both? /6
To take care of this, the model generates not a single situation description but many, each with its own probability. For a given speaker, the description with the Hollywood star might be more likely than the one where the astronomer married Betelgeuse, or vice versa. /7
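A toy illustration of keeping several weighted readings rather than committing to one; the readings and their weights are made up and would differ from speaker to speaker:

```python
import random

# Toy sketch only: keep several situation descriptions, each with its own
# probability, instead of committing to one. Readings and weights are invented.

readings = [
    ({"star": "hollywood-star", "scenario": "celebrity-wedding"}, 0.7),
    ({"star": "celestial-body", "scenario": "science-fiction"}, 0.3),
]

# Sample one reading in proportion to its weight.
descriptions, weights = zip(*readings)
sampled = random.choices(descriptions, weights=weights, k=1)[0]
print(sampled)
```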
A situation description consists of:
* scenarios (at-the-restaurant, gothic-novel)
* concepts (champagne, vampire)
* individuals (this bottle of champagne, Dracula)
* features of individuals (having a cork, being pale)
* roles (being the agent of a drinking event) /8
Like this: /9 [image]
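For concreteness, a minimal data-structure sketch of the ingredients listed above; class and field names are illustrative, not the paper's implementation:

```python
from dataclasses import dataclass, field

# Minimal sketch of a situation description: scenarios, concepts, individuals,
# features and roles. Names and example values are illustrative only.

@dataclass
class Individual:
    concept: str                                   # e.g. 'champagne', 'vampire'
    features: list = field(default_factory=list)   # e.g. ['has-cork'], ['pale']
    roles: dict = field(default_factory=dict)      # e.g. {'agent-of': 'drink-event'}

@dataclass
class SituationDescription:
    scenarios: list        # e.g. ['at-the-restaurant', 'gothic-novel']
    individuals: list      # the individuals populating the situation
    probability: float     # weight of this description among the alternatives

dracula = Individual("vampire", ["pale"], {"agent-of": "drink-event"})
bottle = Individual("champagne", ["has-cork"], {"patient-of": "drink-event"})
desc = SituationDescription(["at-the-restaurant", "gothic-novel"], [dracula, bottle], 0.8)
print(desc)
```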
Technically speaking, the account is implemented as a probabilistic generative model. It takes the logical form of a sentence:

∃x,y [astronomer(x) ∧ star(y) ∧ marry(x,y)]

and generates the conceptual descriptions most likely to account for that logical form. /10
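A toy sketch of that generative direction, enumerating and scoring candidate concept assignments for the two variables; the sense inventories and compatibility weights are invented, not taken from the paper:

```python
from itertools import product

# Toy sketch only: enumerate candidate concept assignments for x and y in
# ∃x,y [astronomer(x) ∧ star(y) ∧ marry(x,y)] and score them.
# Sense inventories and compatibility weights are invented.

senses = {
    "astronomer": {"scientist": 0.9, "amateur-stargazer": 0.1},
    "star": {"hollywood-star": 0.5, "celestial-body": 0.5},
}
marry_compatibility = {            # how plausible marry(x, y) is for each pair
    ("scientist", "hollywood-star"): 0.80,
    ("scientist", "celestial-body"): 0.05,
    ("amateur-stargazer", "hollywood-star"): 0.60,
    ("amateur-stargazer", "celestial-body"): 0.05,
}

candidates = []
for (sx, px), (sy, py) in product(senses["astronomer"].items(), senses["star"].items()):
    score = px * py * marry_compatibility[(sx, sy)]
    candidates.append((score, {"x": sx, "y": sy}))

total = sum(score for score, _ in candidates)
for score, assignment in sorted(candidates, key=lambda c: c[0], reverse=True):
    print(f"{assignment}  p = {score / total:.2f}")
```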
For those interested, please check out the pre-print 🙂 arxiv.org/abs/2009.07936

A Jupyter notebook with working examples: github.com/minimalparts/M…
/11

