Almost all models of cognition are metaphors. They are useful for humans to explain what goes on in the brain. But these models are insufficient for understanding how the brain works.
There are two kinds of scientific models: descriptive models and generative models. A good example is the relationship between thermodynamics and statistical mechanics; the former is a descriptive model and the latter is a generative one.
A descriptive model informs its equations through experimental data; effectively, it is a coarse way of curve-fitting what is observed. A generative model, however, generates what is observed from the interaction of the parts of the system.
A descriptive model cannot explain emergent phenomena; only a generative system can exhibit them. It is the computation intrinsic to a generative system that leads to holistic behavior, that is, behavior that is greater than the sum of its parts.
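To make the contrast concrete, here is a toy sketch in Python (the data and the particle model are invented purely for illustration): the descriptive model fits a line to observed pressure-versus-temperature readings, while the generative model lets a pressure-like quantity emerge from simulated particles hitting a wall.

```python
import numpy as np

rng = np.random.default_rng(0)

# Descriptive model: fit a curve to observed pressure vs. temperature readings.
# The fitted slope summarizes the data but says nothing about why the relation holds.
temps = np.linspace(250, 400, 20)                          # kelvin (toy data)
observed = 0.05 * temps + rng.normal(0, 0.3, temps.size)   # pretend measurements
slope, intercept = np.polyfit(temps, observed, 1)
print(f"descriptive fit: P ~= {slope:.3f} * T + {intercept:.3f}")

# Generative model: a pressure-like quantity emerges from the motion of many
# individual particles bouncing inside a 1D box (momentum transferred to one wall).
def simulated_pressure(temperature, n=5000, box=1.0, dt=1e-3, steps=2000):
    velocities = rng.normal(0, np.sqrt(temperature), n)  # thermal-ish speeds
    positions = rng.uniform(0, box, n)
    impulse = 0.0
    for _ in range(steps):
        positions += velocities * dt
        hit = positions > box                    # reflect off the right wall
        impulse += 2 * np.abs(velocities[hit]).sum()
        velocities[hit] *= -1
        positions[hit] = 2 * box - positions[hit]
        hit = positions < 0                      # reflect off the left wall
        velocities[hit] *= -1
        positions[hit] = -positions[hit]
    return impulse / (steps * dt)                # average force on the wall

for T in (250, 300, 350, 400):
    print(f"generative model at T={T}: pressure-like value ~= {simulated_pressure(T):.1f}")
```

The second model never fits the macroscopic relationship; it falls out of the microscopic bookkeeping, which is the whole point of the distinction.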
A descriptive model is therefore essentially a metaphor for how the system behaves. Metaphors are useful because they serve as shortcuts for reasoning about how systems behave. We are spared the need to run the computation ab initio.
In other words, they are like what Peirce describes as symbols. Symbols are normative models that spare us from thinking about the details. They work well as long as the symbols are grounded in truth.
They are, however, disastrous when they have no grounding. In our present political discourse, we already see how words without grounding in truth can have an outsized and detrimental effect.
The reductionist methods of science have their value in grounding our descriptive models. All biology is bounded by what's possible in physics. Emergence generates behavior that is unexplained, but it never violates the laws of physics.
Humans, however, have limited powers of explanation. There are limits to how deeply we can connect the dots. We rely on metaphors and stories, and we can only follow very short stories. Too much detail creates too much confusion. This is why we can't explain generative models.
This is perhaps illustrated by how we use computers to predict the weather. Meteorologists run multiple models of the weather to predict the possible paths of an incoming hurricane. We employ shortcuts like wind speed, precipitation, size, etc. to reason about the possible damage.
We don't bother ourselves with the details of the predictive computation. At best we mention the historical reliability of the simulation as compared to other simulations. At most we are given a variety of possible paths that the hurricane might traverse.
Underneath the weather simulations is a smorgasbord of algorithms derived from first principles or heuristics that were discovered to be useful. You set the parameters and let the computer crunch away at a prediction.
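As a minimal sketch of the idea (using the Lorenz-63 equations as a stand-in for a real weather code, which this emphatically is not), an "ensemble" is just the same model run many times from slightly perturbed starting conditions; the spread of outcomes is the analogue of the fan of possible hurricane tracks.

```python
import numpy as np

# Toy ensemble forecast: integrate the Lorenz-63 system (a classic stand-in for
# chaotic atmospheric dynamics) from slightly perturbed initial conditions and
# look at how the predicted states spread out.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def forecast(initial_state, steps=2000):
    state = np.array(initial_state, dtype=float)
    for _ in range(steps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(42)
base = np.array([1.0, 1.0, 1.0])

# The "ensemble": the same model run many times with tiny perturbations,
# the analogue of the multiple hurricane tracks shown on the news.
ensemble = np.array([forecast(base + rng.normal(0, 1e-3, 3)) for _ in range(20)])

print("ensemble mean of final state:", ensemble.mean(axis=0).round(2))
print("ensemble spread (std dev):   ", ensemble.std(axis=0).round(2))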
There are layers of complexity that create layers of emergent phenomena, which eventually bubble up into the final prediction of the possible paths of a hurricane. If this complexity is found in inanimate things like hurricanes, what about complex systems built from living cells?
What makes it difficult for humans to reason about complex systems is that the causal relationships are not one-way. The parts interact and feed back on themselves. That is why many of the filler words we use to explain causal relationships in the brain have a vacuous meaning.
We can punt on causal filler verbs and instead use metaphors to describe the many ways that the brain might achieve homeostasis. This kind of analysis seems to be predominantly used by psychologists and psychoanalysts. Mental wellbeing is worded in verbs of balance.
Unfortunately, a description of a system maintaining balance doesn't explain the mechanism by which that balance is achieved. It took decades to achieve that kind of balance in robots like those developed by Boston Dynamics.
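For contrast, here is a minimal sketch of a mechanism that actually generates balance rather than merely describing it: a negative-feedback (proportional) controller nudging a variable back toward a set point despite random disturbances. The numbers are arbitrary; it is only meant to show a generative loop.

```python
import random

# A toy homeostatic mechanism: a proportional (negative-feedback) controller
# holding a variable near a set point despite random disturbances.
# Saying "the system maintains balance" describes the outcome; the loop below
# is one minimal mechanism that actually generates it.

SET_POINT = 37.0      # e.g. a target body temperature (arbitrary units)
GAIN = 0.5            # how strongly the controller reacts to the error

random.seed(0)
value = 35.0          # start away from the set point
for step in range(20):
    disturbance = random.uniform(-0.4, 0.4)  # unpredictable external push
    error = SET_POINT - value                # feedback: compare state to target
    correction = GAIN * error                # respond in proportion to the error
    value += correction + disturbance
    print(f"step {step:2d}: value = {value:.2f}")
```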
Dance is in fact difficult to capture in written form. We don't have good descriptive models of dance. Yet dance is something we can naturally explain. theparisreview.org/blog/2015/02/0…
But to make a machine generate dance is an entirely different problem. This illustrates the vast difference between the complexity of a descriptive model and a generative model.
But why is it that a professional dancer can take a descriptive model of dance and perform it in a manner that satisfies the describer, while we novices can only do so if we actually see and imitate the dance?
This is because, for the skilled, a descriptive model drives a generative model. A professional dancer can interpret dance instructions that carry far fewer details than what ends up expressed in the performance.
One could make the metaphor that intelligence is the ability to follow instructions that are devoid of detail. How does a stem cell interpret the instructions of DNA to eventually arrive at an entire living being?
Just like dance instructions, DNA is a generative language for producing emergent new behaviors. You see, emergence happens because interpreters like cells and humans understand generative languages. medium.com/intuitionmachi…
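A minimal sketch of what a generative language looks like (an L-system, used here purely as an analogy of my own choosing, not something the linked article necessarily proposes): the instructions are two rewrite rules, yet an interpreter applying them repeatedly produces structure far richer than what is literally written down.

```python
# A tiny "generative language": an L-system. The entire instruction set is one
# axiom and two rewrite rules, but the interpreter that unfolds them generates
# ever-richer strings, loosely analogous to terse dance notation or DNA being
# unfolded by an interpreter (a dancer, a cell).

RULES = {"A": "AB", "B": "A"}   # the entire "genome" of this toy system

def generate(axiom: str, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

for n in range(8):
    out = generate("A", n)
    shown = out if len(out) <= 34 else out[:34] + "..."
    print(f"generation {n}: length {len(out)}: {shown}")
```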
Repeat synonyms enough times and people will believe they are true! Here are some newly suggested filler verbs to describe cognition.
But if you prefer to view the brain as a homeostatic system, then these filler verbs should be part of your vocabulary.
A problem, though, with the homeostatic framing is that it is cast in terms of final behavior. It does not describe the generative mechanism. IMHO, the distributed consensus or game-theoretic formulation has greater appeal. deepmind.com/blog/article/E…
If humor has so many benefits (e.g. positive feelings, conflict de-escalation, improving relationships, enhancing creativity, improving marketability, etc.), then why is it not studied more? Clearly, humor has immense utility.
Perhaps it's because humor is a risky business? Failed attempts at humor can leave unfavorable impressions. But don't the consequences of failure imply the need to understand humor better?
To set some ground rules: what we find funny springs from our intuition, and what springs from our intuition is accumulated through our own experiences. What is funny is a subjective and personal experience.
Some more GPT-3-generated quotes of wisdom. What is life? 1- Life is a series of untimely interruptions. 2- Life is a disease that can be transmitted by spitting. 3- Life is what happens to you while you are making other plans
4- Life is a series of one-way journeys with only two destinations: Where you started, and where you are now. 5- Life is a game. I don’t know how to play it. Nobody told me there were rules. 6- Life is a journey, a journey is a series of choices, choices determine destiny.
GPT-3 answers to 'what is time?'. 1- Time is what a clock measures; what a clock measures is not necessarily what your heart feels. 2 - You mean 'Time is'. 3- Time is an illusion. Lunchtime doubly so. 4- Never waste time; it's the stuff life is made of.
Is it not strange that, for some, the significant events of history occurred in a virtual world? I actually wasn't aware of this posting; rather, I was aware when Linux was announced by Torvalds (which came 15 days later).
Social networking was primarily on Gopher at that time. Perhaps the only reason I was interested in Linus's announcement was that I was taking an OS course at the time and we were playing around with Minix. I never was an OS aficionado, though.
I left university shortly after and joined the corporate world. There I was actually cut off from the goings-on on the internet for about a year. It was only when I joined IBM that I got exposed to the WWW.
Some interesting GPT-3 quotes. 1- Economists have predicted nine out of the last five recessions. 2- The early bird might get the worm, but the second mouse gets the cheese. 3- When in doubt, try using a bigger hammer.
4- Insanity is a perfectly rational adjustment to an insane world. 5- Awe is the sense of wonder you feel when you see something that looks like it’s too big to be true. 6- To be sure of hitting the target, shoot first and call whatever you hit the target.
7- If at first you don’t succeed, destroy all evidence that you tried. 8- The hardness of the butter is proportional to the softness of the bread. 9- The severity of the itch is inversely proportional to the ability to reach it.
More GPT-3 jokes. (1) Why did the Anarchist cross the road? To get to the chicken side of the free-range anarchist commune. (2) Why did the Atheist cross the road? There was no chicken, so he didn't. (3) Why did the Catholic cross the road? To get to the other confession.
(4) Why did the Evangelist cross the road? To witness to the chicken. (5) Why did the Hindu cross the road? To get around the chicken. (6) Why did the Agnostic cross the road? To see whether the chicken was on the other side.
(7) Why did the Jihadist cross the road? To increase the body count on the other side. (8) Why did the physicist cross the road? To see what would happen. (9) Why did the theoretical physicist cross the road? Because it was his field.