Another excellent conversation from the Learning Salon, with many participants contributing toward a big picture.
In this episode, @JohnCLangford proposes that Reinforcement Learning is essential to intelligence. The claim is ambiguous, however, since he doesn't precisely define RL in the talk.
This stands in opposition to @ylecun's "icing on the cake" analogy. @KordingLab chimed in with an excellent argument against the cake analogy. He also insightfully pointed to the immense capacity of evolution to absorb information about causality.
@JohnCLangford throws cold water on evolutionary algorithms, arguing that they are wasteful and that, pragmatically, they require too much compute. It's unclear whether he has an overly simplified view of evolution.
John Krakauer @blamlab questions whether it is correct to conflate the process that builds something with the functionality of that thing. He questions whether evolution and RL are even the same kind of thing. Again, a problem stemming from the lack of a definition of error-correction algorithms.
Krakauer @blamlab further argues that there is a disconnect between what is learned through gradual methods (equated with procedural learning) and what is learned through declarative methods. The latter is something humans are capable of.
@shengokai argues, however, that every declarative statement must be grounded in procedural thought. One cannot explain declarative shortcuts for riding a motorcycle or swinging a sword without prior experience.
@neuro_data also makes the observation that evolution doesn't have goals, unlike RL methods, which require them.
@shengokai asks: what defines the rewards for an RL algorithm?
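The force of that question can be made concrete: in any RL setup, the reward function is an exogenous input chosen by the designer, not something the algorithm discovers. A minimal sketch (the epsilon-greedy bandit, its parameters, and both reward functions are illustrative assumptions of mine, not anything from the talk):

```python
import random

def run_bandit(reward_fn, n_arms=3, steps=500, epsilon=0.1):
    """Epsilon-greedy multi-armed bandit. The learner optimizes
    whatever reward_fn says: 'what counts as reward' is decided
    entirely outside the learning algorithm."""
    estimates = [0.0] * n_arms  # running mean reward per arm
    counts = [0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)          # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        r = reward_fn(arm)                          # designer-supplied signal
        counts[arm] += 1
        estimates[arm] += (r - estimates[arm]) / counts[arm]
    return estimates

# Two different designers, two different "intelligent" behaviors
# from the identical algorithm:
random.seed(0)
prefer_arm_2 = run_bandit(lambda a: 1.0 if a == 2 else 0.0)
prefer_arm_0 = run_bandit(lambda a: 1.0 if a == 0 else 0.0)
```

The same learner converges on opposite behaviors depending on which reward function it is handed, which is exactly why "who defines the reward?" is an open question rather than a detail.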
So some open questions remain that may all be related. How does an error-correcting gradual algorithm lead to propositional cognition? Why do goal-less, open-ended algorithms give rise to goal-based gradual algorithms?
I've updated my big picture:

More from @IntuitMachine

25 Oct
John Krakauer in a recent Learning Salon conversation focused on the huge gap between participatory learning and propositional learning. It occurred to me that propositional learning is a kind of hypnosis!
Coincidentally, today's current events are a consequence of hypnosis. @ScottAdamsSays was the first to notice Trump's apparent use of hypnosis methods. fortune.com/2020/09/27/don…
Hypnosis can be scientifically described as 'believed-in imagination'. asch.net/portals/0/jour…
24 Oct
Both evolution and the brain are massively parallel discovery processes. But what is the difference between the two?
As a model for understanding evolution, consider the superorganism known as bacteria and its adversarial viruses. This process involves horizontal gene transfer and endosymbiosis, which are often overlooked by models of evolution that confine themselves to mutation alone.
In an abstract sense, the three mechanisms by which evolution drives innovation are: chance (i.e., mutation), local information propagation (i.e., horizontal gene transfer), and information reuse (i.e., endosymbiosis). What are the equivalents of these in brains?
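A toy sketch can make the three mechanisms concrete. Everything here (bitstring genomes, an all-ones fitness target, the operator rates, the reusable "organelle" library) is my own illustrative assumption, not something from the thread:

```python
import random

TARGET = [1] * 16  # toy fitness: count of bits matching an all-ones target

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Chance: random point mutations."""
    return [1 - g if random.random() < rate else g for g in genome]

def horizontal_transfer(genome, donor):
    """Local information propagation: copy a random segment from a neighbor."""
    i, j = sorted(random.sample(range(len(genome)), 2))
    return genome[:i] + donor[i:j] + genome[j:]

def endosymbiosis(genome, library):
    """Information reuse: splice in a whole previously successful module."""
    module = random.choice(library)
    i = random.randrange(len(genome) - len(module) + 1)
    return genome[:i] + module + genome[i + len(module):]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
library = [[1, 1, 1, 1]]  # a reusable "organelle"
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    children = []
    for g in survivors:
        children.append(mutate(g))
        children.append(horizontal_transfer(g, random.choice(survivors)))
        children.append(endosymbiosis(g, library))
    pop = children
best = max(pop, key=fitness)
```

Mutation alone searches bit by bit; the transfer and reuse operators move whole proven segments at once, which is why restricting evolution to mutation understates its power.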
23 Oct
What is the difference between these verb pairs? Hearing-listening, touching-feeling, thinking-understanding, talking-explaining?
There's a difference between these verbs that can only be understood by grounding in the world.
I tried to see what GPT-3 understood about exploiting and exploring. Here are the associations GPT-3 made: exploring → investigating, analyzing; exploiting → caring, using, respecting, testing.
23 Oct
Here is @demishassabis explaining the importance of embodiment to AI. Deep learning learns by interacting with its world, as explained by many researchers in the sensorimotor field.
I'm constantly surprised that this sensorimotor, or enactivist, approach to understanding human cognition is a minority view in the cognitive psychology and neuroscience communities.
In science, there is a constant struggle against orthodoxy. Just as in chess and Go, there exist higher-level abstractions that trump long-established practices.
22 Oct
Why are the processes of biological cognition inseparable?
If we are to argue for anti-representation (see Cisek's pragmatic representation or Brette's no-coding), then we should have an explanation of why cognition is non-separable.
Non-separability is a characteristic of a holistic system: a process that cannot be decomposed into subcomponent parts. Quantum mechanics (@coecke) can be framed with non-separability as a first principle.
21 Oct
It has been proposed that the brain deals with four kinds of semantics: referential semantics, combinatorial semantics, emotional-affective semantics, and abstraction mechanisms. cell.com/trends/cogniti…
Bohm's Rheomode (levate, vidate, dividate, reordinate), which describes abstract cognitive processes, overlaps with but doesn't align with these semantics: combinatorial and emotional-affective semantics fit under levate, while referential semantics and abstraction fit under reordinate.
There's a rough correspondence between Bohm's Rheomode and Peirce's triadic thinking:
