"You choose some loss function...maybe I'm learning the wrong weights. So I define some goal and then I want to learn these weights, these thetas."
"The reason that one-layer #networks don't really work is that they can only learn linear functions. With multilayer neural networks, you can learn decision boundaries through #backpropagation...so it's a fundamental part of how we train machines, these days."
"The #brain learns [instead] by local #learning — instead of the error getting fed back through backpropagation, each #neuron does some kind of linear regression. It [consequently] works very fast. We have experimental evidence that the brain does something like this."
"For each branch I pick a random hyperplane and draw [it] somewhere in this square, and say, 'If this input falls on one side, the gate will be open, and if it falls on the other side, the gate will be closed."
"Each weight learns a different piecewise linear function, and then I aggregate as I go through the layers. This neuron is learning this section, this neuron is learning this section, and then the next layer is learning both sections."
2) "What do we want? Some desirable features of this model include that it is modeled on the #cerebellum. There isn't any ridiculous time delay due to forward and backward passes."
3) "Parallel fiber inputs go in through 'dendrites' and each branch has a gating key..."
"The fact that the gates are significantly more correlated through learning than the error signal validates our decision to use [this approach]."
1) On the desirable features for computational experiments exploring a cerebellar model for #MachineLearning
2) "If you keep your finger in front of you and you move your head, you'll notice your eyes fixate very well on your fingernail...unlike if you move your finger around."
"Both kinds of #NeuralNetworks learn this chaotic time series, but in different ways. DGNs learn this in very intuitive ways."
ICYMI, this week's SFI Seminar by Fractal Faculty Stuart Firestein (@Columbia) on "what started out as a very simple-seeming problem [re: #olfaction] and turned out to be very complicated":
"Everything we know about the world comes through these little holes in our head and the skin covering our body, processed through tissue specialized to interpret it."
"The thing to notice about [sight and hearing] is that they're [processing] fairly low-dimensional stimuli."
"Even a simple smell is composed of a VARIETY of molecules, and these are high-dimensional from a chemical point of view. And it's also a somewhat discontinuous stimulus. How do we get from this bunch of molecules to this unitary perception of something like a rose?"
"A key feature of this is talk is that we make sense of what each other are saying IN PART by what they say, but ALSO by what we expect of them."
"Language transmits info against a background of expectations – syntactic, semantic, and this larger cultural spectrum. It's not just the choices of make but [how] we set ourselves up to make later choices."
"I think what really drives [the popularity of the #multiverse in #scifi] is regret... There's a line in @allatoncemovie where #MichelleYeoh is told she's the worst version of herself."
"I don't think we should resist melting brains. I think we should just bite the bullet."
"When you measure the spin of an electron, or the position...what happened to all of the other things you could have seen? Everett's idea is that they're all real. They all become real in that measurement."
- SFI Fractal Faculty @seanmcarroll at @guardian theguardian.com/science/audio/…
"At the level of the equations there is zero ambiguity, but the metaphors break down. The two universes it splits into aren't as big as the original universe. The thickness of the two new universes adds up to the thickness of the original universe."
"One way to represent the kind of #compositionality we want to do is with this kind of breakdown...eventually a kind of representation of a sentence. On the other hand, vector space models of #meaning or set-theoretical models put into a space have been very successful..."
"Humans are prone to giving machines ambiguous or mistaken instructions, and we want them to do what we mean, not what we say. To solve this problem we must find ways to align AI with human preferences, goals & values."
- @MelMitchell1 at @QuantaMagazine: quantamagazine.org/what-does-it-m…
“All that is needed to assure catastrophe is a highly competent machine combined with humans who have an imperfect ability to specify human preferences completely and correctly.”
"It’s a familiar trope in #ScienceFiction — humanity threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the #AI research community is concerned about this kind of scenario playing out in real life."
- @MelMitchell1