Today's SFI Seminar by Ext Prof @ricard_sole, streaming now — follow this 🧵 for highlights:
"Why #brains? Brains are very costly...it seems like they are not a very good idea to bring complex cognition to a #biosphere that just needs simple replicators."
"I also want to explore the problem of #consciousness, which is around all the time..."
--> "Replay the tape with different results vs. 'the logic of monsters' & 'life's solution'"
"We can use evolutionary robotics to evolve language...in principle we can evolve VERY different things. #SyntheticBiology...we have a wet lab, which sometimes I think was a mistake. But we can build alternatives that neither biology nor engineering has considered."
"When you think about the selective driver responsible for building brains, think about movement through an environment. Having a central system that integrates the information is really helpful."
Re: #SolidBrains and #GeneticAlgorithms, @ricard_sole outlines a history stretching back through the work of W. Pitts, J. J. Hopfield, S. A. Kauffman, and others to examine the origin of attractor neural network models — a.k.a. "an EXTREME simplification of reality."
1) Thinking about #HebbsRule and memories stored as minima of a high-dimensional attractor landscape.
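The Hebbian-attractor idea above can be sketched in a few lines of Python. This is a generic textbook Hopfield network, not Solé's model; all sizes and patterns here are illustrative. Hebb's rule writes patterns into the weight matrix, and a corrupted input then relaxes into the nearest stored minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # binary (+1/-1) neurons

# Two random patterns to memorize (illustrative data).
patterns = rng.choice([-1, 1], size=(2, N))

# Hebb's rule: W_ij grows with the correlation x_i * x_j across stored patterns.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=10):
    """Synchronously update until the state settles into an attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Flip 8 of 64 bits of the first memory; recall should clean it up.
noisy = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
```

With only two stored patterns in 64 neurons, the crosstalk between memories is small, so the noisy input falls back into the correct basin — the "memory as a minimum" picture in miniature.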
"In natural systems you don't see colonies of ants made of individuals with HUGE brains. In @StarTrek there is this idea of #TheBorg, where the queen is very interesting with a large brain, but you don't see this. There is a tradeoff [between individual & collective cognition]."
1) "Swarms of birds are VERY close to a critical state."
2, 3) "In ant colonies, ants synchronize by touching each other...you have synchronization that is not very periodic."
4) "In information transfer between individuals, you have a peak...there's a critical density."
"Are plants intelligent? Do they have complex cognition? There's a lot of response, integration with the environment, that has to do with changing shapes...morphology is very important. But their memory is completely inaccessible to the plant."
Exploring "cognition space":
How do we pick the right axes?
How do we measure those variables?
Is development part of the story of building the system?
Are there empty spaces — voids in the morphospace?
"There may be fundamental physical constraints. But MAYBE..."
"#Birds have very different #brains than ours. But they do very similar things."
"In The Cave of Hands, it's only left hands, because they were using the right [to blow pigment]."
"What makes us different?"
"No machine today manages time."
"You can see a kind of proto-language emerging in #robotics...you have two embodied agents programmed to look around and invent words to refer to objects or actions. Everything is complete with a rule that makes the agents agree about the words."
"To what extent should we have evolutionary rules that build the minds that we know as part of the story — as part of the preconditions for intelligence?"
"@kevin2kelly makes the point that #AI will have intelligences that have nothing to do with ours..."
"...but when you build artificial #systems, all the time you are taking inspiration from #biology. I haven't seen anything that is really bizarre, that escapes from that."
"When it comes to #ExtendedMind(s) we think that humans are kind of outliers...you have things like #spiders who use their webs for #cognition, but [when it comes to a human-level extended phenotype], how do you 'jump' into that?"
ICYMI, this week's SFI Seminar by Fractal Faculty Stuart Firestein (@Columbia) on "what started out as a very simple-seeming problem [re: #olfaction] and turned out to be very complicated":
"Everything we know about the world comes through these little holes in our head and the skin covering our body, processed through tissue specialized to interpret it."
"The thing to notice about [sight and hearing] is that they're [processing] fairly low-dimensional stimuli."
"Even a simple smell is composed of a VARIETY of molecules, and these are high-dimensional from a chemical point of view. And it's also a somewhat discontinuous stimulus. How do we get from this bunch of molecules to this unitary perception of something like a rose?"
"A key feature of this is talk is that we make sense of what each other are saying IN PART by what they say, but ALSO by what we expect of them."
"Language transmits info against a background of expectations – syntactic, semantic, and this larger cultural spectrum. It's not just the choices of make but [how] we set ourselves up to make later choices."
"I think what really drives [the popularity of the #multiverse in #scifi] is regret... There's a line in @allatoncemovie where #MichelleYeoh is told she's the worst version of herself."
"I don't think we should resist melting brains. I think we should just bite the bullet."
"When you measure the spin of an electron, or the position...what happened to all of the other things you could have seen? Everett's idea is that they're all real. They all become real in that measurement."
- SFI Fractal Faculty @seanmcarroll at @guardian theguardian.com/science/audio/…
"At the level of the equations there is zero ambiguity, but the metaphors break down. The two universes it splits into aren't as big as the original universe. The thickness of the two new universes adds up to the thickness of the original universe."
"One way to represent the kind of #compositionality we want to do is with this kind of breakdown...eventually a kind of representation of a sentence. On the other hand, vector space models of #meaning or set-theoretical models put into a space have been very successful..."
"Humans are prone to giving machines ambiguous or mistaken instructions, and we want them to do what we mean, not what we say. To solve this problem we must find ways to align AI with human preferences, goals & values."
- @MelMitchell1 at @QuantaMagazine: quantamagazine.org/what-does-it-m…
“All that is needed to assure catastrophe is a highly competent machine combined with humans who have an imperfect ability to specify human preferences completely and correctly.”
"It’s a familiar trope in #ScienceFiction — humanity threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the #AI research community is concerned about this kind of scenario playing out in real life."
- @MelMitchell1