"Today you hear people talking about 'AN #AI' or 'THE AI.' Even 15 years ago we would not have heard this; we just heard 'AI.'" @AlisonGopnik on the history of thought on the #intelligence (or lack thereof) of #simulacra, linked to the convincing foolery of "double-talk artists":
"We should think about these large #AI models as cultural technologies: tools that allow one generation of humans to learn from another & do this repeatedly over a long period of time. What are some examples?"
@AlisonGopnik "It's a category error to think these systems are intelligent agents. But the very thing that makes human beings distinct is this capacity to take information from other people...collective intelligence over history. It makes more sense to think of [AI] this way."
"From the very beginning, you get a series of norms, rules, & later laws about new cultural technologies. As each new cultural technology emerges you get new kinds of norms."
"Yes. That's not a joke. That's what I'm saying."
- Gopnik
Testing causal inference in children, @AlisonGopnik notes that "Four-year-olds are very good at overriding a likely assumption with new evidence. Adults are not."
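A toy illustration of that kind of belief revision in Bayesian terms (an illustrative sketch with made-up numbers, not Gopnik's actual experimental task):

```python
# Toy Bayesian update: new evidence overriding a "likely assumption".
# All numbers are invented for illustration.
prior_assumption = 0.9    # the likely assumption is favored 9:1 up front
prior_alternative = 0.1

# How probable is the observed evidence under each hypothesis?
p_evidence_given_assumption = 0.01
p_evidence_given_alternative = 0.9

# Bayes' rule: posterior is proportional to likelihood times prior
unnorm_assumption = p_evidence_given_assumption * prior_assumption    # 0.009
unnorm_alternative = p_evidence_given_alternative * prior_alternative # 0.09
posterior_alternative = unnorm_alternative / (unnorm_assumption + unnorm_alternative)

print(f"P(alternative | evidence) = {posterior_alternative:.2f}")  # ~0.91
# One strong observation flips the 9:1 prior -- the update four-year-olds make easily.
```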
"What's the thing that #AI [researchers] think is the best system for #inference?" And how does it stack up against kids?
#ReinforcementLearning takes hundreds of thousands of iterations to find optimal policies; these systems search a noisier possibility space than kids do:
"Which one of these things should you choose if your hair is a mess? We asked children this question and we asked @OpenAI's #GPT3 [via] #DaVinci. Children were quite good at figuring out to use the fork. [#AI] 'failed significantly.'"
- @AlisonGopnik
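For contrast with the children, here is a minimal sketch of the trial-and-error loop RL relies on (a generic tabular toy on a 2-armed bandit, not any specific system from the talk); note the episode count:

```python
import random

# Toy tabular value-learning loop on a 2-armed bandit (illustrative only).
q = [0.0, 0.0]              # value estimate for each arm
true_reward = [0.2, 0.8]    # arm 1 pays off more often; the agent must discover this
alpha, epsilon = 0.1, 0.1   # learning rate, exploration rate

for step in range(100_000):  # the point: good policies take huge numbers of trials
    if random.random() < epsilon:
        arm = random.randrange(2)        # explore a random arm
    else:
        arm = 0 if q[0] >= q[1] else 1   # exploit the current best estimate
    reward = 1.0 if random.random() < true_reward[arm] else 0.0
    q[arm] += alpha * (reward - q[arm])  # nudge the estimate toward the sample

print(q)  # estimates approach [0.2, 0.8] only after many thousands of pulls
```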
"The puzzle of #innovation is the real tension between how much is [it] the result of generating new possibilities, and how much is the result of constraining the space of possibilities?"
- @AlisonGopnik (@UCBerkeley, SFI)
ICYMI, this week's SFI Seminar by Fractal Faculty Stuart Firestein (@Columbia) on "what started out as a very simple-seeming problem [re: #olfaction] and turned out to be very complicated":
"Everything we know about the world comes through these little holes in our head and the skin covering our body, processed through tissue specialized to interpret it."
"The thing to notice about [sight and hearing] is that they're [processing] fairly low-dimensional stimuli."
"Even a simple smell is composed of a VARIETY of molecules, and these are high-dimensional from a chemical point of view. And it's also a somewhat discontinuous stimulus. How do we get from this bunch of molecules to this unitary perception of something like a rose?"
"A key feature of this is talk is that we make sense of what each other are saying IN PART by what they say, but ALSO by what we expect of them."
"Language transmits info against a background of expectations – syntactic, semantic, and this larger cultural spectrum. It's not just the choices of make but [how] we set ourselves up to make later choices."
"I think what really drives [the popularity of the #multiverse in #scifi] is regret... There's a line in @allatoncemovie where #MichelleYeoh is told she's the worst version of herself."
"I don't think we should resist melting brains. I think we should just bite the bullet."
"When you measure the spin of an electron, or the position...what happened to all of the other things you could have seen? Everett's idea is that they're all real. They all become real in that measurement."
- SFI Fractal Faculty @seanmcarroll at @guardian theguardian.com/science/audio/…
"At the level of the equations there is zero ambiguity, but the metaphors break down. The two universes it splits into aren't as big as the original universe. The thickness of the two new universes adds up to the thickness of the original universe."
"One way to represent the kind of #compositionality we want to do is with this kind of breakdown...eventually a kind of representation of a sentence. On the other hand, vector space models of #meaning or set-theoretical models put into a space have been very successful..."
"Humans are prone to giving machines ambiguous or mistaken instructions, and we want them to do what we mean, not what we say. To solve this problem we must find ways to align AI with human preferences, goals & values."
- @MelMitchell1 at @QuantaMagazine: quantamagazine.org/what-does-it-m…
“All that is needed to assure catastrophe is a highly competent machine combined with humans who have an imperfect ability to specify human preferences completely and correctly.”
"It’s a familiar trope in #ScienceFiction — humanity threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the #AI research community is concerned about this kind of scenario playing out in real life."
- @MelMitchell1