As we reflect on 2019 and decide on our research directions for 2020, here's a summary of my recent reading. The #context tag tells you *why* I ended up there (thx to @julianharris for the tag suggestion ;)). I'm curious to know what other people's journeys have been this year...
Reading list (1) Language evolution papers. #context I am still trying to figure out what exactly language is for. We've seen some cool work on meaning shift in the last years. But *why* do meanings shift? And what does it tell us about the role of language in human behaviour?
Reading list (2) Papers on aligning vector spaces. #context the whole "let's average the utterances of thousands of speakers in a big corpus and pretend it's language" is rather tedious. My semantic space is not your semantic space. But how best to measure such differences?
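One standard way to put a number on "my semantic space is not your semantic space" is orthogonal Procrustes alignment: rotate one speaker's space onto another's over a shared vocabulary, then inspect the per-word residuals. A minimal numpy sketch — the toy data and function names are illustrative, not from any paper in this thread:

```python
import numpy as np

def align_spaces(X, Y):
    """Orthogonal Procrustes: find the rotation R minimising ||X R - Y||_F,
    mapping speaker A's vectors (rows of X) onto speaker B's (rows of Y).
    Rows of X and Y correspond to the same words in each space."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def residual_shift(X, Y):
    """After the best rotation, the leftover per-word distance is one
    (crude) measure of how much two semantic spaces disagree."""
    R = align_spaces(X, Y)
    return np.linalg.norm(X @ R - Y, axis=1)

# Toy illustration: speaker B's space is a rotated copy of A's,
# except that one 'word' genuinely means something different to B.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
theta = 0.5
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
Y = X @ rot
Y[0] += 2.0  # word 0 has drifted for speaker B
shifts = residual_shift(X, Y)
print(shifts.argmax())  # word 0 should stand out
```

The residuals are only meaningful under the assumption that the two spaces differ by (roughly) a rotation plus word-level drift; that assumption is itself one of the things the alignment literature debates.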
Reading list (3) The 'historical' papers of @bhpartee. #context I was raised thinking the world was divided into Chomskians and anti-Chomskians, cognitivists and formalists, innatists and emergentists. And all of this is in fact terribly entangled and I want to understand how.
Reading list (4) Neuroscience / engineering papers on the olfactory system of the fruit fly. #context small organisms have beautiful ways to solve complex problems with a few neurons and feedforward architectures. And I think small AI is the future, scientifically and ethically.
Reading list (5) Psychology papers on mental simulation. #context I don't know how / whether that work overlaps with some of what philosophy and formal semantics have said about possible worlds. Still muddling through...
Reading list (6) Pop neuroscience and pop physics books on the notion of time. #context I'd like to finally understand what probabilities *are*. Damn it.
Today is my last day in a university position so I wanted to write a few things about #leavingacademia⬇️
On NLP, Computational Linguistics and Meaning. Above all: on loving your object of study and owning it. /1
I don't often post here. I'm not a great talker. Which is a real shame because my research field loves talking on social media. About big engineering feats, conference acceptance rates, one's own success or struggles. Although, I note, relatively little about language itself. /2
At the risk of sounding sentimental, I became a computational semanticist because I cared about meaning. Deeply. I wanted to know what it was, the same way others want to know about beetles, asteroids or gravitation. I wanted to do science. /3
Here's some flattened portion of semantic space. What do you call meaning? a) the labeled points; b) the black void around them? /1
It seems to me we often think of the labeled vectors as meanings. But of course, the label is just a name for a position in space. A word may vary its position depending on the speaker, on when the vector was constructed (diachronic change), or on the speaker's age. /2
Perhaps more interestingly, modality will move concepts in space. Instances of a kind may be one way, but could or should be another way. So would it be better to see a space as a superposition of possible worlds? And meaning as pervasive in that space? /3
When you hear 'the batter ran to the ball', what do you imagine? A batter? A ball? But perhaps also a pitch, an audience, the sun and a cap on the head of the batter. /1
What do you imagine when you hear 'the duchess drove to the ball'? /2
We propose a computational model which takes a (simple) sentence and builds a conceptual description for it. In the process of doing so, it captures appropriate word senses or other lexical meaning variations. E.g. that the batter's ball is not a dancing event. /3
Talking about #compositionality opens a giant can of worms. One worm: what is it that we compose and where does it come from? What is it that does composition, and where does it come from? I've tried to put some thoughts together on #innateness. /1
This thread has three parts: a) how some approaches to compositionality deal with innateness; b) why we should think about innateness in relation to computational models; c) why we should think about innateness in relation to data. /2
PART 1 - HISTORY. I previously mentioned that different approaches to composition have different relations to innateness. Let's get back to some obvious protagonists -- Chomsky, Katz & Fodor... And let's start with a turning point in the historical debate on innateness... /3
Here's a thread surveying some 'classic' work on #compositionality. Lots of people seem to be discussing this right now, but with partial references to the whole story. My aim is to highlight some of the philosophical and psychological issues in the history of the concept. 1/
Small recap first... There are two principles usually associated with #compositionality, both (possibly incorrectly) attributed to Frege. See Pelletier's "Did Frege believe in Frege's principle?" (2001). 2/
1) Bottom-up, the 'compositionality principle': "... an important general principle which we shall discuss later under the name Frege's Principle, that the meaning of the whole sentence is a function of the meanings of its parts." Cresswell (1973) 3/
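Read literally, the bottom-up principle just says the meaning of the whole is *some* function of the meanings of the parts. A toy sketch of one such function — additive vector composition, one of the simplest distributional baselines (the lexicon here is made up for illustration; nothing about Frege's own formalism is implied):

```python
import numpy as np

# Hypothetical toy lexicon: each word-meaning is a point in a 3-d space.
lexicon = {
    "batter": np.array([1.0, 0.2, 0.0]),
    "hits":   np.array([0.1, 1.0, 0.3]),
    "ball":   np.array([0.3, 0.4, 1.0]),
}

def compose(sentence):
    """The compositionality principle read literally: the meaning of the
    whole is a function (here, simply the sum) of the meanings of its parts."""
    return sum(lexicon[w] for w in sentence.split())

whole = compose("batter hits ball")
```

Note that addition ignores word order entirely ("batter hits ball" and "ball hits batter" come out identical) — a well-known weakness, and a reminder of how much the phrase "a function of the meanings of its parts" leaves open.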
I personally find it extremely hard to recommend papers to others because the papers I most cherish are not necessarily relevant to the world at large, and they might not even be what your standard reviewer would consider 'a good paper'.
I cherish papers that connect things I'm thinking about. They're puzzle pieces that fit my own puzzle and may well be totally useless to someone else's puzzle. And sometimes it's not about the whole paper. The missing puzzle piece may be in a single paragraph or a footnote.