Jay McClelland on "What's missing in Deep Learning" crowdcast.io/e/learningsalo…
He argues that systematic generalization in humans is not innate; it is something we acquire.
Thus, to achieve systematic generalization, we need to devise machines that learn how to generalize systematically. That is, a meta-solution to the problem.
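One way to make "learning to learn" concrete is a Reptile-style meta-learning loop (Nichol et al.), sketched below in NumPy on toy sine-wave tasks. This is my own illustration of the idea, not McClelland's proposal; the model, function names, and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each 'task' is a sine wave with random amplitude and phase."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def init_params():
    return [rng.normal(0.0, 0.1, (1, 32)), np.zeros(32),
            rng.normal(0.0, 0.1, (32, 1)), np.zeros(1)]

def forward(params, x):
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h, h @ w2 + b2

def sgd_step(params, x, y, lr=0.01):
    """One gradient step on mean-squared error (gradients by hand)."""
    w1, b1, w2, b2 = params
    h, pred = forward(params, x)
    d_pred = 2.0 * (pred - y) / len(x)
    d_h = (d_pred @ w2.T) * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
    grads = [x.T @ d_h, d_h.sum(0), h.T @ d_pred, d_pred.sum(0)]
    return [p - lr * g for p, g in zip(params, grads)]

# Reptile: the outer loop learns an initialization from which
# any new task can be mastered in a few inner-loop steps.
theta = init_params()
for meta_step in range(1000):
    f = sample_task()
    x = rng.uniform(-np.pi, np.pi, (10, 1))
    phi = theta
    for _ in range(5):                        # inner loop: adapt to one task
        phi = sgd_step(phi, x, f(x))
    eps = 0.1
    theta = [t + eps * (p - t) for t, p in zip(theta, phi)]  # outer update
```

The system is never told how to generalize across tasks; the outer loop discovers parameters that make rapid adaptation possible, which is the meta-solution flavor of the argument.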
We need machines that exploit intuition.
Expertise is achieved through deliberate practice.
Humans learn to solve tasks by using intermediate goals.
He also offers a definition of intuition and lays out next steps.
He concludes that humans seek explanations for the world.
Languages are not fully systematic. Exceptions persist in the words we use most often. Language is 'quasi-systematic'.
Why would language be quasi-systematic? Language ultimately reflects the structure of what we think about; the mapping is never perfect, only approximate.
It is not language that gives us systematic structure; it is the world. This is where Chomsky was wrong.
Attention-based architectures are what make both people and networks 'smart'. It is an architectural innovation that is not tied only to language: to behave in context-sensitive ways, we must be aware of context. It will continue to extend into all kinds of domains.
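As a minimal sketch of the mechanism (my own illustration, not from the talk), scaled dot-product attention re-weights each item's representation by its context:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how well it matches each key (its 'context')."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # softmax over keys
    return w @ v                              # context-sensitive mixture

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))              # 3 items, 4-dim features
out = attention(tokens, tokens, tokens)       # self-attention
print(out.shape)                              # (3, 4)
```

Nothing here is specific to words, which is why the same operation transfers to vision, proteins, and other domains.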
Explanation is understanding why you might do things. The problem with automation is that it follows only literal instructions and cannot explain why. This is an essential capability for AI that participates in the shared world of humans.
We have an incredible advantage in our ability to share our thoughts, despite language being constrained to a sequential channel. Language is thus fundamentally about sharing.
Language forces us to categorize. We have a tendency to overuse categorization, which straitjackets our thoughts. It is powerful and also misleading.
There is a tension: lean too hard on systematicity and you block intuition. The process of writing down an idea may obscure the process that led to the idea.
What surprised me about the talk is that, apparently, systematic reasoning is not an ability available to most of the population. This has huge ramifications for political discourse!
Not only do we have intrinsic biases coming from the cultures we grew up in, but too many of us cannot parse the arguments that would reveal a cognitive bias. This is quite a revelation!
I had thought that people just had biases (see: Haidt). Not only do they have biases, they also cannot follow a certain level of argumentation! Humanity is certainly in deep trouble!

More from @IntuitMachine

15 Nov
Analysis of QAnon by a game designer. Everyone should read! medium.com/curiouserinsti…
QAnon's method is like the movie Inception on a mass scale: planting seeds of misinformation so that its victims construct the alternative reality for themselves.
The author concludes that this isn't a movement that grew organically, but rather one that is orchestrated with big money.
14 Nov
Yesterday's Learning Salon with Gary Marcus. The last 30 minutes were excellent (after the guest left). The best conclusion, from @blamlab: AI is the subversive idea that cognitive psychology can be formalized.
crowdcast.io/e/learningsalo…
It is important to realize that a description of a missing cognitive functionality has neither the precision nor the hints needed to say how it is implemented in the brain. An implementation in code does not tell us how the brain implements it.
Another important distinction is a disagreement about how to do research. The Deep Learning community has argued that we should not constrain ourselves with a priori hypotheses that may be wrong: let the learning system discover the algorithms.
13 Nov
The more you try to understand cognition, the more you realize how long the journey to human-like general intelligence may be.
Our frameworks for understanding cognition are getting better. However, one has to understand that cognition arises through emergence in complex adaptive systems, and such systems are very difficult to set up and replicate.
To get an intuition for how large the gap truly is, one only needs to observe how awkward and non-organic our present-day robots are. Why can't they perform with the nimbleness of honey bees?
13 Nov
The idealization of the ethnic peasantry as the one true national class is the generating condition that led to genocides in Nazi Germany, Armenia, and Cambodia. It is fueled by resentment of elites as the root of one's own misery.
We need to learn from history and ask why a country like Cambodia would put a quarter of its population to death merely because they were experts in different crafts. en.wikipedia.org/wiki/Cambodian…
What collectively drives people to kill on a mass scale? What makes people ignore their natural empathy for others? It is the collective delusion that the existence of another is the reason for one's misery.
12 Nov
Our brains see affordances not because we see 3D objects, but because of how we move to see our 3D world. This is different from inverse graphics.
Inverse graphics implies that a representation of a 3D object is created in our minds. If we assume the 'lazy brain' hypothesis, this doesn't make sense, because it is wasteful computation.
When we look at a Rubik's cube, we don't instantly know which colors are on which sides. We become aware of the layout only when we attend to it. Perception is an active process rather than a photographic snapshot.
5 Nov
Modularity is critically important for AGI, but we should avoid a naive formulation of modularity.
The most developed notion of modularity comes from computer science. We have notions of encapsulation and all kinds of composition design patterns that are strategically employed to trade off one concern against another.
Modularity is thus understood as a controlled coordination mechanism between interacting parts: certain information is allowed to be malleable while other kinds must remain immutable.
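As a toy illustration of that coordination (my own sketch, not an example from the thread), encapsulation keeps a module's internals malleable while its public contract stays immutable:

```python
from dataclasses import dataclass

@dataclass
class MovingAverage:
    """A module whose internal state is malleable but whose
    public contract (a read-only `value`) is immutable."""
    _count: int = 0
    _total: float = 0.0

    def update(self, x: float) -> None:
        # Internal state may change freely behind the interface...
        self._count += 1
        self._total += x

    @property
    def value(self) -> float:
        # ...but callers only ever see a stable, read-only view.
        return self._total / self._count if self._count else 0.0

avg = MovingAverage()
for x in (1.0, 2.0, 3.0):
    avg.update(x)
print(avg.value)  # 2.0
```

The interacting parts coordinate only through the stable interface, which is the non-naive sense of modularity being argued for.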
