The term 'linear' in the tradition of mathematical programming, and hence machine learning, doesn't have the same meaning as 'linear' in the tradition of physics. So when ML folks speak of non-linearity, it is not the same non-linearity that physicists speak of.
This was readily apparent in a recent article: "They’re “linear” because the only allowable power is exactly 1 and graphs of solutions to the equations form planes." quantamagazine.org/new-algorithm-…
Dynamical equations in physics usually involve powers of 2. A non-linear equation in physics is one that typically does not have a closed-form analytic solution. The defining dynamic of a non-linear system is that it feeds back into itself. See the sketch below.
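As a minimal sketch of this physicist's sense, take the logistic map, a textbook non-linear system (my choice of example, not one from the article): the state appears at power 2, every step feeds the output back in as the next input, and for most parameter values there is no closed-form solution.

```python
# Logistic map: a minimal non-linear dynamical system in the physics sense.
# The state x appears at power 2, and each step feeds back into the next.

def logistic_map(x, r=3.9):
    # r*x*(1 - x) expands to r*x - r*x**2: a power-2 term, not power 1
    return r * x * (1 - x)

x = 0.2
for _ in range(50):
    x = logistic_map(x)  # feedback: today's state drives tomorrow's state

print(x)  # at r = 3.9 the trajectory is chaotic; no closed-form solution
```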
So when ML folks speak of non-linearity, they mean something entirely different from what physicists mean: typically just a non-linear activation function applied between linear layers. It does not mean that there is feedback in the system. Unfortunately, somewhere along the way, people conflated the two because both fields use the word non-linear.
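Contrast that with a minimal sketch of the ML sense: a toy feedforward layer with a ReLU activation (weights are random placeholders, not a trained model). The function is non-linear in the ML vocabulary, yet data flows strictly forward; nothing feeds back.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # placeholder weights, not a trained model
W2 = rng.normal(size=(2, 4))

def relu(z):
    return np.maximum(0, z)  # the "non-linearity" in the ML vocabulary

def forward(x):
    h = relu(W1 @ x)  # linear map, then a pointwise non-linear activation
    return W2 @ h     # strictly feedforward: no state, no feedback loop

print(forward(np.array([1.0, -0.5, 2.0])))
```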
To be clear, Machine Learning, and thus Deep Learning, draws its vocabulary from two fields. One is mathematical programming, where you find optimization methods. The second, which usually gets pigeonholed as statistics, is probabilistic methods.
Mathematical programming is ultimately about constraint satisfaction. Probabilistic methods are methods for reasoning about aggregate phenomena. They aren't the same thing, but by tradition the discipline's metaphors come from these two fields.
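A toy contrast of the two vocabularies, with made-up numbers on both sides: one problem posed as minimizing a loss, the other as updating a belief.

```python
# Mathematical programming vocabulary: satisfy a goal by minimizing a loss.
# Toy problem: find w minimizing (w - 3)**2 via gradient descent.
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)  # derivative of the loss
    w -= 0.1 * grad     # step against the gradient
print(round(w, 4))      # ~3.0, the constraint-satisfying value

# Probabilistic vocabulary: reason about an aggregate by updating a belief.
# Toy problem: is a coin fair or biased (p=0.8), given 7 heads in 10 flips?
prior_fair, prior_biased = 0.5, 0.5
like_fair = 0.5**7 * 0.5**3          # P(data | fair)
like_biased = 0.8**7 * 0.2**3        # P(data | biased)
evidence = like_fair * prior_fair + like_biased * prior_biased
print(round(like_biased * prior_biased / evidence, 3))  # P(biased | data)
```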
But when we speak of biological brains, there's an entirely different vocabulary. And when we speak of the physics of non-linear systems and complexity, it's a different vocabulary again. So there are a lot of impedance mismatches in the conversation.
Adjacent fields like computer science, genetics, neuroscience, and the different kinds of psychology (cognitive, ecological, enactive) all have different vocabularies. Many ideas out there are overlooked simply because they are misunderstood by others.
I come from a tradition of physics and computer science. Mathematical programming isn't foreign to me, but probabilistic models of the kind Bayesians speak about are. Mathematically, they bear a resemblance to Statistical Mechanics, but metaphorically they are different.
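To make that resemblance concrete, here is the standard textbook correspondence: read the negative log joint probability as an energy, and the Bayesian posterior takes the Gibbs form from Statistical Mechanics.

```latex
% Gibbs distribution:  p(x) = e^{-E(x)/T} / Z
% Bayesian posterior, written in the same form with E as an "energy":
\[
p(\theta \mid D) = \frac{e^{-E(\theta)}}{Z},
\qquad
E(\theta) = -\log p(D \mid \theta) - \log p(\theta),
\qquad
Z = \int e^{-E(\theta)}\, d\theta .
\]
```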
But when one encounters the horrific complexity intrinsic to biology, even the powerful metaphors of physics and computer science have their limitations. A big mistake is to favor one kind of metaphor over another.
Norbert Wiener favored a physics metaphor: his cybernetic systems were analogues of dynamical systems. Then came AI, which took the opposite metaphor, the one inspired by digital computers and mathematical logic.
But biology is neither a dynamical system nor a digital system. It is in fact both! It is dynamic because it has to engage with the physical world. It is digital because that is what ensures its repeatability.
What I would like to point out is that we have to be very careful about the metaphors we employ to describe cognition. These metaphors are drilled into us through years of practice in a specific tradition of study.
Wittgenstein said, "Philosophy is a battle against the bewitchment of our intelligence by means of language." We have the same problem when we attempt to express our understanding of biology and, ultimately, brains.
We don't understand general intelligence because of the limits of our language. As Wittgenstein put it, "The limits of my language mean the limits of my world."
Wittgenstein introduced the 'language-turn' to philosophy. The language-turn was introduced into biology via biosemiotics. Recently, the language-turn has made an immense contribution to Deep Learning, visible in the impressive capabilities of language models like GPT-3.
The metaphors that are intuitive for humans to grasp are the kinds we can assimilate through our daily experiences. The fortunate thing about language is that we are immersed in it, and thus we have an intuitive understanding of its nuances.
Metaphors are like models: all metaphors are wrong, but some are more useful than others. I propose, then, that language be the primary metaphor used to understand general intelligence.
Oh... philosophers use the term 'linguistic turn' rep.routledge.com/articles/thema… Which is unfortunate, since linguists ever since Chomsky have been using a bad metaphor for language!
In a previous life, I was involved in interoperable protocol design, specifically B2B protocols. I explored the question: what makes for frictionless protocols?
This question is ultimately a question about language. How are humans able to coordinate by communicating in ambiguous ways? Chomsky's error was to straitjacket language into rigid grammar.
To further bolster his argument, he hypothesized that humans acquired a genetic mutation, enabling an operation he called Merge, that gifted us with compositional language capabilities. That is, that we have special circuitry that allows us to do language.
Of course, Chomsky is wrong. What gives human language its power is its ambiguity. Just as visual ambiguity can lead to different interpretations, linguistic ambiguity also leads to different interpretations. This inconsistency is the source of human creativity.
The source of any innovation is what is lost in translation. You might call this spontaneous symmetry breaking: a crystallized idea loses its rigidity and becomes fluid enough to admit alternative interpretations.
Darwinian natural selection would not make progress in the absence of 'lost in translation' mechanisms.
