In this episode, @JohnCLangford proposes that Reinforcement Learning is essential to intelligence. This is an ambiguous claim, however, since he never precisely defines RL in the talk.
This stands in opposition to @ylecun's icing-on-the-cake analogy. @KordingLab chimed in with an excellent argument against the cake analogy, while insightfully pointing to the immense capability of evolution to absorb information about causality.
@JohnCLangford throws cold water on evolutionary algorithms, arguing that they are wasteful and that, as a pragmatic matter, they demand too much compute. It's unclear whether he holds an overly simplified view of evolution.
John Krakauer @blamlab questions whether it is correct to conflate the process that builds something with the functionality of the thing it builds, and whether evolution and RL are even the same kind of thing. Again, the problem is a lack of definition of error-correction algorithms.
Krakauer @blamlab further argues that there is a disconnect between what is learned through gradual methods (which he equates with procedural learning) and what is learned through declarative methods. The latter is something humans are capable of.
@shengokai argues, however, that every declarative statement must be grounded in procedural thought. One cannot explain declarative shortcuts for riding a motorcycle or swinging a sword without prior experience.
@neuro_data also makes the observation that evolution doesn't have goals, unlike RL methods, which require goals. @shengokai asks: what defines the rewards for an RL algorithm?
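That question has a concrete answer in practice: the reward function is not discovered by the agent at all; it is hard-coded by the experiment designer. A minimal sketch (my own toy illustration, not from the talk) using tabular Q-learning on a 5-state line world makes the point — change the designer-chosen `GOAL` and the "intelligent" behavior the agent converges to changes with it:

```python
import random

GOAL = 4  # designer-chosen goal state on a 5-state line world

def reward(state):
    # The reward function encodes the goal externally to the agent.
    return 1.0 if state == GOAL else 0.0

def step(state, action):
    # actions: -1 (move left) or +1 (move right), clipped to [0, 4]
    return max(0, min(4, state + action))

# Tabular Q-learning over the designer-specified reward
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)
for _ in range(500):
    s = random.randrange(5)
    for _ in range(10):
        if random.random() < epsilon:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        target = reward(s2) + gamma * max(q[(s2, b)] for b in (-1, 1))
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

# The greedy policy the agent ends up with is entirely an artifact
# of the reward() we wrote above.
policy_at_0 = max((-1, 1), key=lambda act: q[(0, act)])
print(policy_at_0)
```

Evolution has no analogue of that `reward()` function sitting outside the system, which is exactly the disanalogy @neuro_data points at.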
This leaves some open questions that may all be related. How does an error-correcting, gradual algorithm lead to propositional cognition? Why do goal-less, open-ended algorithms give rise to goal-based, gradual algorithms?
I've updated my big picture:
In a recent Learning Salon conversation, John Krakauer focused on the huge gap between participatory learning and propositional learning. It occurred to me that propositional learning is a kind of hypnosis!
Coincidentally, today's current events are a consequence of hypnosis. @ScottAdamsSays was the first to notice Trump's apparent use of hypnosis methods. fortune.com/2020/09/27/don…
Both evolution and the brain are massively parallel discovery processes. But what is the difference between the two?
As a model for understanding evolution, take the superorganism known as bacteria and its adversarial viruses. This process involves horizontal gene transfer and endosymbiosis — mechanisms often overlooked by models of evolution that confine themselves to mutation alone.
In the abstract, three mechanisms of evolution drive innovation: chance (i.e., mutation), local information propagation (i.e., horizontal gene transfer), and information reuse (i.e., endosymbiosis). What are the equivalents of these in brains?
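The three mechanisms can be made concrete as operators in a toy genetic algorithm. This is my own illustrative sketch (the bit-string genomes, segment lengths, and operator probabilities are all arbitrary choices, not claims about biology): mutation supplies chance, copying a segment from a neighbor stands in for HGT, and grafting a proven module wholesale is a crude stand-in for endosymbiosis.

```python
import random

random.seed(1)

GENOME_LEN = 16
POP_SIZE = 16

def fitness(g):
    # deliberately trivial objective: count of 1-bits
    return sum(g)

def mutate(g):
    # chance: flip one random bit
    i = random.randrange(GENOME_LEN)
    g = list(g)
    g[i] ^= 1
    return g

def horizontal_transfer(g, donor):
    # local information propagation: copy a 4-gene segment from a neighbor
    i = random.randrange(GENOME_LEN - 4)
    return g[:i] + donor[i:i + 4] + g[i + 4:]

def reuse_module(g, module):
    # information reuse: splice in an entire proven sub-solution
    return list(module) + g[len(module):]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
initial_best = max(fitness(g) for g in pop)

for _ in range(150):
    module = max(pop, key=fitness)[:4]  # leading module of current best
    offspring = []
    for _ in range(2 * POP_SIZE):
        parent = random.choice(pop)
        r = random.random()
        if r < 0.5:
            child = mutate(parent)
        elif r < 0.8:
            child = horizontal_transfer(parent, random.choice(pop))
        else:
            child = reuse_module(parent, module)
        offspring.append(child)
    # elitist selection: keep the fittest POP_SIZE of parents + offspring
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP_SIZE]

final_best = max(fitness(g) for g in pop)
print(initial_best, final_best)
```

Note that nothing in the loop states a goal the way an RL reward does; selection only ranks what chance, transfer, and reuse happen to produce.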
What is the difference between these verb pairs? Hearing-listening, touching-feeling, thinking-understanding, talking-explaining?
There's a difference between the verbs in each pair that can only be understood through grounding in the world.
I tried to see what GPT-3 understood about exploiting and exploring. Here are the associations GPT-3 made: exploring → investigating, analyzing; exploiting → caring, using, respecting, testing.
Here is @demishassabis explaining the importance of embodiment to AI. Deep Learning systems learn by interacting with their world, as many researchers in the sensorimotor field have explained.
I'm constantly surprised that this sensorimotor or enactivist approach to understanding human cognition remains a minority view in the cognitive psychology and neuroscience communities.
In science, there is a constant struggle against orthodoxy. Just as in chess and Go, there exist higher-level abstractions that trump long-established past practices.
Why are the processes of biological cognition inseparable?
If we are to argue for anti-representation (see Cisek's pragmatic representation or Brette's no-coding argument), then we should have an explanation of why cognition is non-separable.
Non-separability is a characteristic of a holistic system: the process cannot be decomposed into subcomponent parts. Quantum mechanics (@coecke) can be framed with non-separability as a first principle.
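For concreteness, the standard textbook illustration of non-separability in quantum mechanics (my addition, not from the thread) is an entangled state, which cannot be factored into independent parts:

```latex
\left|\psi\right\rangle
  = \tfrac{1}{\sqrt{2}}\bigl(\left|00\right\rangle + \left|11\right\rangle\bigr)
  \neq \left|a\right\rangle \otimes \left|b\right\rangle
  \quad \text{for any single-qubit states } \left|a\right\rangle, \left|b\right\rangle .
```

The anti-representational claim about cognition is analogous: the process does not factor into independent representational components.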
It has been proposed that the brain deals with four kinds of semantics: referential semantics, combinatorial semantics, emotional-affective semantics, and abstraction mechanisms. cell.com/trends/cogniti…
Bohm's rheomode verbs (levate, vidate, dividate, reordinate), which describe abstract cognitive processes, overlap with but don't align to these semantics. Combinatorial and emotional-affective semantics fit under levate; referential semantics and abstraction mechanisms fit under reordinate.
There's a rough correspondence between Bohm's Rheomode and Peirce's triadic thinking: