I've come to the conclusion that "the brain is an information processor" is also a BS definition.
I agree with the notion that the brain is computational. All of reality is computational. 'Information processing' is a metaphor for computation, and a very bad one, because it really says nothing.
Indeed, computation takes information and transforms it (i.e., processes it) into other information. But this is a vacuous statement. The universe is causal in nature, so every cause is followed by an effect; input is followed by output.
The problem with the definition is that it makes no significant distinction about the kind of information or the kind of processing that the brain does. It's an abstraction that is vacuous because it distinguishes the brain from nothing else in reality.
That is, a reality that doesn't have magic in it. If by 'information processor' you merely mean the absence of magic, then it is senseless to say it of just the brain and not of everything else.
But let's really discuss what bothers me. The word 'representation' as used by people who study the mind. Hopefully, we don't define representation as being the same as information!
What, then, is the definition of representation in terms of a distinct kind of information? What kind of information is representation? To answer this, we first have to answer: what is information?
Have the representationalists come up with a definition of information? If so, have they come up with a definition of meaning? I doubt they have. In short, they too are talking BS about something they don't have a definition for!
BS is when you talk about something as being essential while not in fact knowing the definition of that something.
So let's begin with a definition of meaning as distinct from information. I borrow and refine a definition coming from Bateson and Bohm. Meaning is the difference in form that makes a difference in action.
The key insight here is that an organism must recognize information that has 'a difference in form' and know how to effect 'a difference in action'.
A difference in form is the relevant information that an organism can perceive in its environment. Framed in terms of von Uexküll's Umwelt, it is information that reveals what is possible for the organism.
The difference in action implies that the organism selects from the relevant information and performs actions that make a difference. Making a difference implies that the organism is performing intelligent (not random) actions in service of its intentions.
Given this definition, how can we define representation that is based on information that is relevant for an organism? The information that is relevant for an organism is the information it learns via its interaction with its environment.
Representation is the subjective information that an organism uses to inform useful action. It is informed by the evolutionary history of the organism's species and the interaction of the living individual with its world.
Experiments in Deep Learning reveal the distributed and non-local nature of the parameters that influence behavior. At best, we can argue that a representation exists, but the decryption algorithms to make sense of these representations are private to the individual network.
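To make that point concrete, here's a minimal sketch (a toy illustration, not any particular published experiment): a random-feature network is fit to an arbitrary target function, then each hidden unit is ablated in turn. The damage is spread thinly across many units rather than concentrated in one, which is the distributed, non-local character referred to above. The network sizes and the target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 500, 10, 200                      # samples, input dim, hidden units

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]     # arbitrary target behavior

W1 = rng.normal(size=(d, h))                # fixed random hidden layer
H = np.tanh(X @ W1)
w2, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout weights

base = np.mean((H @ w2 - y) ** 2)           # baseline error of the fit

# Ablate each hidden unit in turn and measure how much error it adds.
damage = []
for i in range(h):
    w_abl = w2.copy()
    w_abl[i] = 0.0                          # knock out one unit's contribution
    damage.append(np.mean((H @ w_abl - y) ** 2) - base)
damage = np.array(damage)

print(f"baseline mse: {base:.4f}")
print(f"worst single-unit ablation adds: {damage.max():.4f}")
print(f"median single-unit ablation adds: {np.median(damage):.4f}")
```

No single 'grandmother' parameter holds the behavior; knowing that the representation is in there somewhere does not give us an external procedure for reading it out, which is the sense in which it remains private to the network.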
Stephen Wolfram hypothesizes that entropy is a manifestation of our inability to decrypt complex phenomena. In this sense, what researchers call representation is effectively hidden behind a blanket of ignorance. It is information that we cannot reduce to comprehension.
Representation is thus the known unknowable. It's the dark matter of cognition.
Why are the processes of biological cognition inseparable?
If we are to argue for anti-representation (see: Cisek's pragmatic representation or Brette's no-coding) then we should have an explanation of why cognition is non-separable.
Non-separability is a characteristic of a holistic system: a process that cannot be decomposed into subcomponent parts. Quantum mechanics (see @coecke) can even be framed with non-separability as a first principle.
It has been proposed that the brain deals with 4 kinds of semantics. Referential semantics, combinatorial semantics, emotional-affective semantics, and abstraction mechanisms. cell.com/trends/cogniti…
Bohm's Rheomode verbs (levate, vidate, dividate, reordinate), which name abstract cognitive processes, overlap with but don't align to these semantics. Combinatorial and emotional-affective semantics fit under levate; referential semantics and abstraction fit under reordinate.
There's a rough correspondence between Bohm's Rheomode and Peirce's triadic thinking.
Grady Booch @Grady_Booch and company (i.e. IBM) are now thinking of Fast (Intuition) and Slow (Reflective) AI. New paper with research questions: arxiv.org/abs/2010.06002
The real question, however, is: does the human mind actually have two cognitive systems (i.e., 1 and 2)? Kahneman didn't commit to this. I don't think there are two systems; there is just one. System 2 is just System 1 being reflective.
Gibson came up with the word affordance. It's derived from the verb 'afford'. I've always liked the term since it implies the recognition of possibilities. en.wikipedia.org/wiki/Affordance
There's a problem though with his method. He took a verb and created a noun. He should have listened to David Bohm who realized that our noun-centric language could be restricting our ability to understand the world. He called his verb-centric language rheomode.
Paul Cisek decided he had enough with the conventional taxonomy of cognition (i.e. input, output, cognition) and decided on a new taxonomy.
"The brain is a computer" is a damn problematic metaphor. I prefer to say that "the brain is an intuition machine".
The term computer is conventionally understood to mean a digital computer. It's the kind that we program. It's the kind that is designed by minds and manufactured on assembly lines. It's the kind that can't repair itself. It's the kind without any autonomy.
It is a horrible metaphor. "The brain is an intuition machine" is a better one. It's the kind that learns from experience. It's the kind that develops in an inside-out manner. It's the kind that creates itself. It's the kind that repairs itself. It is autonomous.
"Shut up and calculate" is the affliction we have when we substitute symbols for understanding.
Humans are linguistic bodies. A huge part of our brains has been exapted (verb form of exaptation) for language. Thus it's conceivable that our innate capabilities for understanding have diminished in use.
Simon DeDeo wrote an insightful tweetstorm about explanations that appear intuitive.