"One way to represent the kind of #compositionality we want to do is with this kind of breakdown...eventually a kind of representation of a sentence. On the other hand, vector space models of #meaning or set-theoretical models put into a space have been very successful..."
"We can take the grammatical structure of the sentence and build up word representations... We take pre-group grammar and build up functional words by concatenating...and then cancelling out the types."
Alternatively, role-filler models view symbolic structure as a set of role and filler bindings, bringing together distinct vector spaces for each:
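A small sketch of role-filler binding in the Smolensky tensor-product style (an illustration, not the specific model discussed): each filler is bound to its role via an outer product, the bindings are summed into one structure, and with orthonormal role vectors a filler can be recovered exactly by projecting onto its role. The role/filler names and dimension are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Orthonormal role vectors (rows of an orthogonal matrix from QR),
# so that unbinding below is exact rather than approximate.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
roles = {"subject": Q[0], "verb": Q[1], "object": Q[2]}
fillers = {f: rng.standard_normal(dim) for f in ("Alice", "likes", "Bob")}

# Bind each filler to its role (outer product) and superpose the bindings.
sentence = sum(np.outer(roles[r], fillers[f])
               for r, f in [("subject", "Alice"),
                            ("verb", "likes"),
                            ("object", "Bob")])

# Unbinding: project the structure onto a role to recover its filler.
recovered = roles["subject"] @ sentence  # equals fillers["Alice"]
```

With random (merely near-orthogonal) roles the recovery would be noisy; orthonormal roles make the example exact.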
"One problem is that words have multiple meanings. If you want to work out the contextual meaning of a word you have to work out the contextual meaning of other words. It can become quite complex. We would like the ambiguity of words to resolve by using compositional approaches."
"If you have two different senses of a word — bed as in river and bed as in sleep — then you can represent it as a probabilistic mixture of these different senses."
"When you use these representations, what you find is that the mixed-ness, the measure of the ambiguity of the phrase, goes down in comparison to the ambiguity of the noun."
"So how do we build these things? I gave one way but it's kind of limited because you need word vectors."
Another neural approach:
"The task is to build representations of each sentence and then measure which ones are most similar to each other."
"In this work we created a data set that has specifically very metaphorical sentences, and ALL of the models found that harder."
"We feed images into an encoder, and then we train compositional image models to give us some vectors. The other possibilities are not in the image, so there isn't ambiguity."
"We give the system labels that are and are not in the image and it has to pull out the correct labels."
"The CLIP models and the role-filler models don't do very well. But type-logical models do — about 100% of the training set — but can't generalize... Even the compositional models are not fantastic."
1) Additional resources on whether LLMs can be trained for generalization.
2) "One thing I've been playing around with..."
3) "Actually, language is used in dialogue to describe things outside of the text. So how do we incorporate images into these compositional models?"
ICYMI, this week's SFI Seminar by Fractal Faculty Stuart Firestein (@Columbia) on "what started out as a very simple-seeming problem [re: #olfaction] and turned out to be very complicated":
"Everything we know about the world comes through these little holes in our head and the skin covering our body, processed through tissue specialized to interpret it."
"The thing to notice about [sight and hearing] is that they're [processing] fairly low-dimensional stimuli."
"Even a simple smell is composed of a VARIETY of molecules, and these are high-dimensional from a chemical point of view. And it's also a somewhat discontinuous stimulus. How do we get from this bunch of molecules to this unitary perception of something like a rose?"
"A key feature of this is talk is that we make sense of what each other are saying IN PART by what they say, but ALSO by what we expect of them."
"Language transmits info against a background of expectations – syntactic, semantic, and this larger cultural spectrum. It's not just the choices of make but [how] we set ourselves up to make later choices."
"I think what really drives [the popularity of the #multiverse in #scifi] is regret... There's a line in @allatoncemovie where #MichelleYeoh is told she's the worst version of herself."
"I don't think we should resist melting brains. I think we should just bite the bullet."
"When you measure the spin of an electron, or the position...what happened to all of the other things you could have seen? Everett's idea is that they're all real. They all become real in that measurement."
- SFI Fractal Faculty @seanmcarroll at @guardian theguardian.com/science/audio/…
"At the level of the equations there is zero ambiguity, but the metaphors break down. The two universes it splits into aren't as big as the original universe. The thickness of the two new universes adds up to the thickness of the original universe."
"Humans are prone to giving machines ambiguous or mistaken instructions, and we want them to do what we mean, not what we say. To solve this problem we must find ways to align AI with human preferences, goals & values."
- @MelMitchell1 at @QuantaMagazine: quantamagazine.org/what-does-it-m…
“All that is needed to assure catastrophe is a highly competent machine combined with humans who have an imperfect ability to specify human preferences completely and correctly.”
"It’s a familiar trope in #ScienceFiction — humanity threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the #AI research community is concerned about this kind of scenario playing out in real life."
- @MelMitchell1