Carlos E. Perez
Deep Learning with Complex Adaptive Systems https://t.co/d6VnvF0smP https://t.co/pZQhpIToQ9 #deeplearning #ai #machinelearning #nlp
17 Nov
Excellent paper from Google discussing the robustness of Deep Learning models when deployed in real domains. arxiv.org/abs/2011.03395
The issue is described as 'underspecification'. The analogy they make is to a system of linear equations with more unknowns than equations. The excess freedom leads to differing behavior across networks trained on the same dataset.
This is one of the rare papers that has practical significance to the production deployment of deep learning. I've alluded to this problem previously with respect to physical simulations. medium.com/intuitionmachi…
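The linear-equations analogy can be sketched numerically. This is my own toy illustration, not from the paper: one constraint, two parameters, so the "dataset" underdetermines the "model", and two models that fit equally well can still diverge at deployment time.

```python
# Toy illustration of underspecification (hypothetical numbers):
# one equation, two unknowns.

def fits(x, y):
    """The single training constraint: x + 2*y == 4."""
    return x + 2 * y == 4

# Two different parameter settings that both satisfy the constraint.
model_a = (4, 0)
model_b = (0, 2)

# Both fit the training data perfectly...
print(fits(*model_a), fits(*model_b))  # True True

# ...yet they disagree on a "deployment" query, e.g. predicting x - y.
predict = lambda x, y: x - y
print(predict(*model_a), predict(*model_b))  # 4 -2
```

The excess freedom (the unconstrained direction in parameter space) is exactly what lets identically trained networks behave differently off-distribution.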
16 Nov
What is the difference between causation and causality? The former is a consequence of a generative model and the latter is a consequence of a descriptive model.
Causation is the emergent partial ordering induced by computation. Theoretically, all the characteristics of computation, such as universality and the halting problem, are inherited by the concept of causation.
Causality however is a different thing. It is a process that approximates the causal behavior of complex processes. Approximates in the sense that it describes the process. This is different from simulating the process which has its own intrinsic limitations (Church-Turing).
15 Nov
Analysis of QAnon by a game designer. Everyone should read! medium.com/curiouserinsti…
QAnon's method is like the movie Inception on a mass scale: planting seeds of misinformation so that its victims generate their understanding of an alternative reality for themselves.
The author concludes that this isn't a movement that grew organically, but rather one that is orchestrated with big money.
14 Nov
Jay McClelland on What's missing in Deep Learning crowdcast.io/e/learningsalo…
He argues that systematic generalization in humans is not innate but rather something that we acquire.
Thus he argues that to achieve systematic generalization we need to devise machines that learn how to do systematic generalization. That is, a meta-solution to the problem.
14 Nov
Yesterday's Learning Salon with Gary Marcus. The last 30 minutes were excellent (after the guest left). The best conclusion: @blamlab AI is the subversive idea that cognitive psychology can be formalized.
crowdcast.io/e/learningsalo…
Important to realize that a description of a missing cognitive functionality does not have enough precision, or leave enough hints, as to how it is implemented in the brain. Implementations in code do not imply how it is implemented in the brain.
Another important distinction is that there is disagreement on how to do research. The Deep Learning community has argued that we should not constrain ourselves with a priori hypotheses that may be wrong. Let the learning system discover the algorithms.
13 Nov
The more you try to understand cognition, the more you realize how long the journey to human-like general intelligence may be.
Our frameworks of understanding cognition are getting better. However, one has to understand that cognition arises through emergence in complex adaptive systems. These systems are very difficult to set up and replicate.
To get an intuition of how large the gap truly is, one only needs to observe how awkward and non-organic our present-day robots are. Why can't they perform with the nimbleness of honey bees?
13 Nov
The idealization of the ethnic peasantry as the one true national class is the generating condition that led to genocides in Nazi Germany, Armenia, and Cambodia. It is fueled by resentment of the elite as the root of one's own misery.
We need to learn from history and ask why a country like Cambodia would put a quarter of its population to death only because they were experts in different crafts. en.wikipedia.org/wiki/Cambodian…
What collectively drives people to kill people on a mass scale? What makes people ignore their natural empathy for others? It is the collective delusion that the existence of another is the reason for one's misery.
12 Nov
Our brains see affordances not because we see 3D objects, but rather because of how we move to see our 3D worlds. This is different from inverse graphics.
Inverse graphics implies that a representation of a 3D object is created in our minds. If we assume the 'lazy brain hypothesis' then this doesn't make sense because it is wasteful computation.
When we look at a Rubik's cube, we don't instantly know which colors are on which sides. We are aware of the layout only when we attend to it. It is an active form of perception rather than a photographic snapshot.
5 Nov
Modularity is critically important for AGI, but we should avoid a naive formulation of modularity.
The most developed notion of modularity comes from computer science. We have notions of encapsulation and all kinds of composition design patterns that are strategically employed to trade off one concern against another.
Modularity thus is understood as a controlled coordination mechanism between interacting parts. Certain information is allowed to be malleable while other kinds must remain immutable.
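The malleable-versus-immutable distinction maps naturally onto encapsulation. A minimal sketch, with hypothetical names of my own choosing: a frozen interface (the immutable contract) in front of freely mutable internals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interface:
    """The immutable part of a module: its contract cannot change."""
    name: str
    version: int

class Module:
    """Illustrative module: malleable internals behind a fixed interface."""
    def __init__(self, interface: Interface):
        self.interface = interface  # fixed contract
        self._state = {}            # private, freely mutable internals

    def set(self, key, value):
        self._state[key] = value    # internals may change at will

    def get(self, key):
        return self._state.get(key)

m = Module(Interface("vision", 1))
m.set("weights", [0.1, 0.2])
print(m.get("weights"))             # internals changed freely
# m.interface.version = 2           # would raise FrozenInstanceError
```

The coordination happens at the interface boundary: other modules depend only on the frozen contract, never on the mutable state.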
1 Nov
Netflix's Queen's Gambit must be the ultimate mathematical nerd series. One cannot overstate the detail of this series and how it takes you back to a different time.
The series takes place in the 1960s, where a young child's mother dies and she is sent to an orphanage. What's incredible is how the series reveals the changes in the 60s: the technology, architecture, interior decoration, music, and fashion.
But that's just a slice of the series; it's about a gifted chess player. Few may have noticed that her natural mother wrote a Ph.D. thesis in group theory. This reveals her unique innate ability.
30 Oct
Cognition is all about the regulation of selves and their intrinsic motivators.
Motivators are indexical signs that are significant for the regulation of a self-model.
A self-model does not remain static, but is in constant negotiation with its many motivators.
29 Oct
Self is the process of identity.
Revised to capture Varela's self-referentiality->closure->autopoiesis->autonomy
Self leads to modularity. The Self with Others leads to conflict. Empathy leads to a resolution between the Self and Others.
28 Oct
We use the word causality as a means of understanding cognition but we don't really understand its distinctions. Let's look at what C.S.Peirce had to say about causality.
What @yudapearl says is that to understand a system one needs to hypothesize a model of the system and then see how well this model agrees with observations. Statistics is just one of the methods of testing. But it's not how one formulates the original model.
Peirce called this cognitive capability to hypothesize about the world as Abduction. Bayes rule is in fact a kind of abduction. When Bayesians talk about formulating priors, they are actually implicitly talking about an impoverished form of abduction.
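The Bayes-as-abduction reading can be made concrete with a toy inference to the best explanation. The numbers and hypothesis names below are my own illustrative assumptions, not from the thread: given an observation (wet grass), weight each hypothesis by prior times likelihood.

```python
# A minimal sketch of abduction via Bayes rule (hypothetical numbers).

priors = {"rain": 0.3, "sprinkler": 0.7}       # P(H): the "formulated priors"
likelihood = {"rain": 0.9, "sprinkler": 0.5}   # P(wet_grass | H)

# Posterior P(H | wet_grass) by Bayes rule: prior * likelihood, normalized.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

# Abduction: pick the hypothesis that best explains the observation.
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

In Peirce's terms this is impoverished abduction: the hypothesis space is fixed in advance, whereas full abduction also invents the hypotheses themselves.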
26 Oct
(1) All technologies are combinations. Individual technologies are combined from components. (2) Each component of a technology is itself a technology in miniature. (3) All technologies harness and exploit some natural effect or phenomenon.
This is Brian Arthur's definition of technology in amazon.com/Nature-Technol…
His framework is general enough so that we can recognize things that we don't conventionally consider as technology. These include culture, human organizations, processes, language and biology.
25 Oct
John Krakauer in a recent Learning Salon conversation focused on the huge gap between participatory learning and propositional learning. It occurred to me that propositional learning is a kind of hypnosis!
Coincidentally, today's currents events are a consequence of hypnosis. @scottadamssays was the first to notice Trump's apparent use of hypnosis methods. fortune.com/2020/09/27/don…
Hypnosis can be scientifically described as 'believed-in imagination'. asch.net/portals/0/jour…
24 Oct
Both evolution and the brain are massively parallel discovery processes. But what is the difference between the two?
As a model for understanding evolution, let's take the superorganism known as bacteria and its adversarial viruses. This process involves horizontal gene transfer and endosymbiosis, which are often overlooked by models of evolution that confine themselves to mutation.
In an abstract sense, the three mechanisms by which evolution drives innovation are: chance (i.e., mutation), local information propagation (i.e., HGT), and information reuse (i.e., endosymbiosis). What are the equivalents of these in brains?
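The three mechanisms can be sketched as operators on toy genomes. This is my own illustrative encoding (genomes as lists of ints), not a biological model: mutation perturbs, HGT copies a gene across individuals, endosymbiosis merges whole genomes.

```python
import random

random.seed(0)  # deterministic toy run

# A toy population of "genomes" (illustrative encoding: lists of ints).
population = [[1, 2], [3, 4], [5, 6]]

def mutate(genome):
    """Chance: randomly perturb one gene."""
    g = genome[:]
    g[random.randrange(len(g))] += random.choice([-1, 1])
    return g

def horizontal_transfer(donor, recipient):
    """Local information propagation: copy one gene across individuals."""
    return recipient + [random.choice(donor)]

def endosymbiosis(a, b):
    """Information reuse: absorb one whole genome into another."""
    return a + b

population[0] = mutate(population[0])
population[1] = horizontal_transfer(population[0], population[1])
population.append(endosymbiosis(population[1], population[2]))
print(population)
```

Note the scale difference the thread points at: mutation moves one gene, HGT moves one gene between lineages, and endosymbiosis reuses an entire evolved genome at once.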
24 Oct
Another excellent conversation from the Learning Salon. A lot of participants contributing towards a big picture.
In this episode, @JohnCLangford proposes that Reinforcement Learning is essential to intelligence. An ambiguous statement, however, since he doesn't precisely define RL in the talk.
This is in opposition to @ylecun's icing-on-the-cake analogy. @KordingLab chimed in with an excellent argument against the cake analogy. He insightfully notes, however, the immense capability of evolution to absorb information about causality.
23 Oct
What is the difference between these verb pairs? Hearing-listening, touching-feeling, thinking-understanding, talking-explaining?
The difference between the verbs in each pair can only be understood by grounding in the world.
I tried to see what GPT-3 understood about exploiting and exploring. Here are the associations made by GPT-3: Exploring -> investigating, analyzing. Exploiting -> caring, using, respecting, testing.
23 Oct
Here is @demishassabis who explains the importance of embodiment to AI. Deep Learning learns by interacting with its world as explained by many researchers in the sensorimotor field.
I'm constantly surprised that this sensorimotor or enactivist approach to understanding human cognition is a minority view in cognitive psychology and neuroscience communities.
In science, there is a constant struggle against orthodoxy. Just as in chess and in Go, there exist higher-level abstractions that trump long-established practices.
22 Oct
Why are the processes of biological cognition inseparable?
If we are to argue for anti-representation (see: Cisek's pragmatic representation or Brette's no-coding) then we should have an explanation of why cognition is non-separable.
Non-separability is a characteristic of a holistic system. This means that a process cannot be decomposed into subcomponent parts. Quantum mechanics (@coecke) can be framed with non-separability as a first principle.
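The quantum case gives the cleanest concrete instance of non-separability. A minimal sketch using the standard two-qubit Bell state (a textbook example, not from the thread): a pure two-qubit state is separable into independent parts iff its 2x2 coefficient matrix has rank 1, i.e. zero determinant.

```python
import math

def is_separable(state):
    """state = [c00, c01, c10, c11]; separable iff the 2x2 coefficient
    matrix [[c00, c01], [c10, c11]] has zero determinant (rank 1)."""
    c00, c01, c10, c11 = state
    return math.isclose(c00 * c11 - c01 * c10, 0.0, abs_tol=1e-12)

product = [1, 0, 0, 0]                             # |00> = |0> (x) |0>
bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]  # (|00> + |11>) / sqrt(2)

print(is_separable(product))  # True: decomposes into subcomponent parts
print(is_separable(bell))     # False: holistic, non-separable
```

The Bell state carries information that belongs to neither qubit alone, which is exactly the "cannot be decomposed into subcomponent parts" property the tweet describes.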
21 Oct
It has been proposed that the brain deals with 4 kinds of semantics. Referential semantics, combinatorial semantics, emotional-affective semantics, and abstraction mechanisms. cell.com/trends/cogniti…
Bohm's Rheomode (levate, vidate, dividate, reordinate), a set of abstract cognitive processes, overlaps with but doesn't align to these semantics. Combinatorial and emotional-affective fit under levate; referential and abstraction fit under reordinate.
There's a rough correspondence between Bohm's Rheomode and Peirce's triadic thinking: