Gibson came up with the word affordance. It's derived from the verb 'afford'. I've always liked the term since it implies the recognition of possibilities. en.wikipedia.org/wiki/Affordance
There's a problem with his method, though. He took a verb and created a noun. He should have listened to David Bohm, who realized that our noun-centric language could be restricting our ability to understand the world. He called his verb-centric language rheomode.
Paul Cisek decided he'd had enough of the conventional taxonomy of cognition (i.e. input, output, cognition) and proposed a new one.
The first thing to notice about this is that all behavior is rooted in homeostasis. Damasio argues that the purpose of the brain is homeostasis.
The second thing to notice is that under interaction (i.e. doing) there are selection and specification. Recall from my post about the meaning of computation that computation is the interplay between intention and mechanism. medium.com/intuitionmachi…
So what happens in the cerebral cortex is... well, computation. But I want to create even finer distinctions with intention/action selection and mechanism/action specification. This is where I take inspiration from David Bohm.
First, let's reword Cisek's original taxonomy.
Homeostasis -> Behavior -> (Energy Regulation, Movement) -> Energy Seeking -> (Agency, Exploitation, Exploration, Interaction) -> (Intention, Specification) -> Learning
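To make that chain easier to play with, here's a minimal Python sketch that just encodes it as an ordered sequence of refinements; the grouping into tuples is my reading of the arrows above, not Cisek's own diagram.

```python
# The reworded taxonomy as an ordered chain of refinements.
# Plain strings are single levels; tuples hold the parallel
# distinctions introduced at that level.
TAXONOMY_CHAIN = [
    "Homeostasis",
    "Behavior",
    ("Energy Regulation", "Movement"),
    "Energy Seeking",
    ("Agency", "Exploitation", "Exploration", "Interaction"),
    ("Intention", "Specification"),  # action selection / action specification
    "Learning",
]

def render(chain):
    """Render the chain back into the arrow notation used above."""
    parts = [
        "(" + ", ".join(level) + ")" if isinstance(level, tuple) else level
        for level in chain
    ]
    return " -> ".join(parts)

print(render(TAXONOMY_CHAIN))
```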
Then let's create even more fine-grained distinctions using Rheomode verbs. These are to levate, to vidate, to dividate, and to reordinate. They relate to attention, perception, decomposition, and ordering, respectively.
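Here's an equally small sketch pairing each rheomode verb with the cognitive function just listed; the dictionary encodes only the correspondence stated above, and the parenthetical glosses are rough.

```python
# Bohm's rheomode verbs mapped to the cognitive functions listed above.
RHEOMODE = {
    "levate": "attention",        # roughly, to lift into relevance
    "vidate": "perception",       # roughly, to see
    "dividate": "decomposition",  # roughly, to divide into parts
    "reordinate": "ordering",     # roughly, to order anew
}

for verb, function in RHEOMODE.items():
    print(f"to {verb} ~ {function}")
```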
Now let's throw in a mix of C.S. Peirce's semiotics and we have a refreshed and advanced vocabulary for cognition! medium.com/intuitionmachi…
Gone is the impoverished notion of thinking of the brain as being like a computer! I'm fed up with conversations going in circles because we have a piss-poor vocabulary! medium.com/intuitionmachi…
We now have a vocabulary that expresses everything that needs to be expressed about cognition. This is a first step toward a theory of general intelligence.
Let's revisit 4EA (Embedded, Embodied, Extended, Enactive, and Affective) to see if we've covered all the bases in our vocabulary. Cisek's evolutionary perspective captures embedded, embodied, and enactive. Extended seems to be an advanced cognitive capability that we still need to express.
Extended can be expressed as a generalization of shared intentionality. Shared intentionality (Tomasello) is coordinated behavior with other agents. Coordinated behavior with tools is usually what extended implies. Shared information is expressed under Peirce's genuine signs.
Social cognition, imitation, mind-reading, and language are advanced cognitive capabilities. They are distinctions of shared intentionality. Sensorimotor empathy is shared movement. Computational empathy is shared intention and specification.
Symbol grounding (or rather, detachment) is a capability downstream of computational empathy. It follows the evolutionary path that arises as a consequence of shared behavior: participation -> perception -> procedure -> proposition.
Let's expand this using Gibson's method of defining relationships across the duality of agent and environment. In this case, the environment includes other agents that participate with the same intentions.
BTW, I failed to mention that in Cisek's taxonomy, perception is absent! This is because action and perception are the same thing. You can see that underneath action selection is the act of object detection.
In contrast, Bohm's rheomode verbs (i.e. processes, not things) don't include action, only perception! In cognitive evolution, we begin with action and evolve into perception. This further evolves into procedural and propositional thinking.
What I'm showing is how evolution progresses toward higher-order thinking from a kind of cognition where action and perception are inseparable. Although we may think that higher-order thinking is separate from action, this is not the case for humans.
Human cognition, as a consequence of our evolution, implies the permanent coupling of action and perception. We learn by performing actions, not by perception alone!
Human intelligence is a consequence of prior cognitive habits that were developed long before Homo sapiens set foot on this earth. That is why it is senseless that so many researchers ignore evolution.
Here's the proposal:

