Just as quantum mechanics is unintuitive to humans, parallel distributed computation is likely unintuitive too. NAND gates are not intuitive. SK combinators are not intuitive. The building blocks of cognition are likely just as unintuitive.
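To make the point concrete, here is a minimal sketch (in Python, chosen purely for illustration) of how the familiar logic gates can be reconstructed from NAND alone. The constructions are correct but hardly obvious, which is exactly the sense in which such building blocks are unintuitive:

```python
def nand(a: bool, b: bool) -> bool:
    # NAND: the single primitive everything below is built from.
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)              # NOT is a NAND of a signal with itself

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))        # AND is a negated NAND

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))  # OR falls out of De Morgan's law

# Sanity check: the unintuitive constructions reproduce the intuitive truth tables.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```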
Human minds are simply incapable of explaining how human minds work. At best we can explain the emergent properties, but not the underlying mechanisms.
Of course, we still need good metaphors to partially explain human cognition. We need them to formulate explanations of how we teach, make decisions, and generate ideas. We cannot be blind to human cognitive nature.
Everything we do to structure our society is based on our understanding of human cognition. The policy differences among conservatives, liberals, libertarians, and so on stem from different understandings of it.
Humans assume that others think the same way they do, and therefore propose policies that favor their own kind of thinking. That is why there is so much pushback on something like UBI.
So I haven't given up on trying to find a better metaphor for human cognition. It would be a very troublesome scenario to have machines that can predict how we think but are unable to explain why they make their predictions.
Superintelligent machines will need the kind of empathic intelligence required to explain things to us lowly humans in metaphors that we understand. A lot may be lost in translation, but that's the best we can hope for!
The brilliance of someone like Richard Feynman was that he was able to explain complex physics in ways that we could intuitively understand.
But intuitively understanding something is not the same as fully understanding it. Feynman could explain things in simpler terms, but that did not convey all that he understood about a phenomenon. It just gave you a chance to grasp one part of the entire picture.
To explain something to someone, you need to discover what that person currently understands and then work out how to connect that understanding to a new one. For complex concepts, that chain can be very long, and many parts of it are not intuitive.
It's the non-intuitive parts of that chain that are problematic, because what is intuitive for one person may not be intuitive for another. Newly introduced concepts need time to marinate in one's mind before they become intuitive.
I struggled for several months when I learned about Bitcoin for the first time. I had assumed that the concept of money was intuitive. It actually isn't! That's what trips so many people up!
Biology is similar: we are so accustomed to being embedded in a living world that we don't notice the unintuitive complexity all around us. That is because we see biology through an uninformed lens.
Do you recall early 2020, when people were not acting aggressively enough to mitigate the risk of the COVID virus? People could not grok exponential growth and thus took the wrong actions.
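As a minimal sketch of why this defies intuition (assuming, purely for illustration, a fixed 3-day doubling time rather than any real epidemiological parameter): a case count that looks harmless at the start of a month is anything but at the end of it.

```python
# Toy model: cases double every 3 days (illustrative rate, not a real-world figure).
cases = 100
for day in range(0, 31, 3):
    print(f"day {day:2d}: ~{cases:,} cases")
    cases *= 2
# day 0: ~100 cases ... day 30: ~102,400 cases — a thousandfold increase in one month.
```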
Are we also at this stage with regards to exponential growth in AI capabilities? Are we at this stage with regards to the exponential growth of decentralized finance?

