It is patently absurd that so many researchers in cognitive science believe in a magical local rule (e.g. Hebbian learning, gradient descent, free energy, neural circuitry) that leads to complex intelligence. I guess humans just never get tired of their reductionist methods.
And yet we have made tremendous strides using just local rules like gradient descent, plus massive computation, to emulate the intuition found in human minds.
Deep Learning methods appear to be a bottom-up process, but training is actually a top-down process where an observational error is propagated down the layers, perturbing weights along the way. An observational error can only be identified at the top.
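To make the top-down claim concrete, here is a minimal sketch (my illustration, not code from this thread): a toy two-layer "network" fitting y = 2x, where the error is observable only at the output and is then propagated back down, perturbing each layer's weight.

```python
# A minimal sketch (illustrative only) of the top-down process described
# above: a toy two-layer model y_hat = w2 * (w1 * x). The observational
# error exists only at the top; it is then carried back down the layers,
# perturbing each weight along the way.

def train(data, lr=0.1, steps=300):
    w1, w2 = 0.5, 0.5                 # one weight per layer
    for _ in range(steps):
        for x, y in data:
            h = w1 * x                # bottom-up pass: layer 1
            y_hat = w2 * h            # bottom-up pass: layer 2 (the top)
            err = y_hat - y           # error identified only at the top
            g2 = err * h              # gradient for the top layer
            g1 = err * w2 * x         # error propagated down to layer 1
            w2 -= lr * g2             # top-down perturbation of weights
            w1 -= lr * g1
    return w1, w2

# Fit y = 2x from two observations; the product w1 * w2 approaches 2.
w1, w2 = train([(1.0, 2.0), (0.5, 1.0)])
```

The local update rule here matters less than the loop around it: without the feedback of `err` from the top, no amount of local perturbation would find the right weights.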
Wiener's Cybernetics was one of the early formulations that focused on the importance of this feedback loop. Any system that adapts to an environment requires feedback. It is the feedback that is fundamental and not the local rule.
What's the difference between the notion of feedback and the notion of gradient descent? The former is a first principle, the latter is a specific implementation detail. Yet our cognitive biases gravitate to specifics rather than generalities.
Humans just love narratives that identify specifics. We readily consume the narrative that the dopamine neurotransmitter is the cause of all our addictive behavior and that humans can thus be systematically manipulated.
I don't disagree that humans can be systematically manipulated. What I disagree with are the pervasive narratives that appeal to our reductionist bias. Yes, we are indeed being manipulated to favor stories about specific things being causes.
The problem is that these reductionist explanations (albeit easy to digest) gloss over inherent complexity. There's a lot more to achieving artificial intelligence than gradient descent or rewards.
There's a lot more to intelligence than the notion of compressing knowledge. Yet we all love these placeholder symbols because it simplifies our thought processes.
This is the reason why humans have always invented gods. We invent proxy ideas to avoid thinking about the hard problems. Even worse, we invent hard problems to think about things that don't exist.
Hence we invented the word 'qualia', and the hard problem of consciousness was invented along with it. The truth of the matter is that the symbols we invent become the objectives of the intellectual games we play.
Wittgenstein was correct: “Philosophy is a battle against the bewitchment of our intelligence by means of language.”
To understand the flaw in the arguments of others is to recognize what they are ignoring when they use symbols to explain things. Symbols absent of grounding have no meaning, yet many employ symbols absent of any meaning.
We become trapped in our symbols. To quote Grothendieck "Discovery is a child’s privilege. He ignores the silent and flawless consensus that is part of the air we breathe – the consensus of all the people who are, or are reputed to be, reasonable.”
The reason AGI is very hard is that many of us are trapped using the wrong language. Many of us do not understand the fallacy of our expressions and fail to recognize the meaninglessness of our words.
We are confused by the language we use to explain words like 'meaning', 'understanding', 'knowledge', and 'intelligence'. We go about using these words without realizing that we don't know what they mean.
We are aware only of the folksy use of these words. We know their meaning only through their use in everyday living. But understanding these words for the purpose of understanding human cognition requires a lot more intellectual machinery.
To understand physics you need to invent a lot of new language (see: calculus for Newton). It should be no surprise that understanding cognition requires the invention of a new language.
The problem with our natural language is that it is noun-centric. Unfortunately, all of reality is process-centric. Thus, when we stick exclusively to a noun-centric language, we cannot properly map the complexity of reality.
But all hope is not lost. We are at a cusp of a new understanding of cognition. We are very close to demolishing the symbol grounding problem.
I'm composing this tweetstorm from a dream I had. It begins with the idea that Ptolemy's model of the movement of the planets was extremely accurate.
Ptolemy's model was accurate enough to be very useful for navigators of their time. But it worked well because it was finely tuned to fit with observed experimental data.
But what was wrong with Ptolemy's model is that it did not correctly capture cause and effect. The earth and the planets revolve around the sun due to gravity; everything does not revolve around the earth. This was the model Copernicus proposed, one whose later defenders, like Galileo, paid gravely for championing it.
It occurs to me that modern society has led to the perspective that we have immense control of our lives. This was not always true in the past where people could die for many reasons out of their control.
The modern understanding of the word 'tragedy' is that it is when someone suffers for something that they could have avoided entirely: there existed a means of controlling one's destiny, and it was ignored.
We see this play out on a mass scale in our actions during the pandemic (face masks and vaccinations) and our absence of risk mitigation against climate change. Yet we remain utterly perplexed as to why people can't see the tragedy that is happening in slow motion.
It is a surprise to many that the math used in physics is a weird kind of handwavy math.
"Quantum field theory is mathematics that has not yet been invented by mathematicians." quantamagazine.org/the-mystery-at…
The math in physics is not as rigorous as that found in mathematics, but it works with extreme accuracy! Maps (i.e. models) are not the territory, but you want your maps to accurately represent the territory.
As Feynman said: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
It's becoming depressingly obvious that we have to depend on stupid people to do the right thing (see: vaccination and climate change).
We've been under the false assumption that stupid people can be convinced by good arguments and reason. You will have to throw away that assumption forever!
Explain to me why the CDC has mask-wearing guidelines that appeal only to smart people. The damn problem with smart people is that they assume most people are smart enough to understand good reasoning. Wrong!!!