Does artificial anthropocentric intelligence lead to superintelligence?
When I use the term Artificial General Intelligence, my meaning of 'General' comes from the psychometric notion of the g factor, the general ability measured by intelligence tests.
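For concreteness, g is typically estimated as the common factor behind correlated scores on a battery of tests. Below is a minimal sketch of that idea, assuming simulated test scores and a first principal component as a stand-in for a proper factor analysis; the test names, loadings, and sample size are purely illustrative.

```python
# Minimal sketch: estimate a g-like factor from simulated, correlated test scores.
# The tests, loadings, and sample size are illustrative assumptions, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_people = 500

# Assume one latent general ability plus test-specific noise.
g = rng.normal(size=n_people)
loadings = {"vocabulary": 0.8, "matrices": 0.7, "arithmetic": 0.6, "digit_span": 0.5}
scores = np.column_stack(
    [w * g + rng.normal(scale=np.sqrt(1 - w**2), size=n_people) for w in loadings.values()]
)

# First principal component of the standardized scores acts as the estimated g.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
g_hat = z @ eigvecs[:, -1]  # component with the largest eigenvalue

# The sign of an eigenvector is arbitrary, so compare via absolute correlation.
print("correlation between latent g and estimated g:", abs(np.corrcoef(g, g_hat)[0, 1]))
```

The point is only that 'General' here is a statistical summary of human test performance, which is exactly what makes it an anthropocentric yardstick.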
The g factor is an anthropocentric measure. The question that hasn't been explored in depth is whether a "human-complete" synthetic intelligence leads to a superintelligence. The prevailing assumption is that it does.
I am going to argue that this assumption may not be true.
The assumption that an AGI automatically explodes into a superintelligence is driven by the bias of perceiving human intelligence as the pinnacle of all intelligence.
Humans, like all other living things with brains, have their cognition forged by their umwelt and their environment. Our cognition is the way it is because it is suited to the niche we evolved into.
We have not evolved into high-speed symbolic and mathematical processors because, for most of the 200,000 years of human existence, that skill wasn't very important. Computers are certainly better than us at many tasks of a cognitive nature.
Instead of evolving a capability, humans create tools to compensate for its absence. We invented computers because we ourselves are error-prone and slow at computation.
A synthetic general intelligence is a kind of automation that is able to understand our intentions. It is also autonomous and understands the social context it is in. It is not something that computes fast; we already have that in computers.
These are cognitive skills that are useful for humans, but they aren't necessarily the same skills that might be needed for solving all kinds of complex problems.
As an illustration, AlphaZero is better than its predecessor AlphaGo because it was trained from scratch, without human gameplay in its training set. It plays in a way that is not encumbered by the biases of human play.
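To make the "trained from scratch, no human games" point concrete, here is a minimal self-play sketch in that spirit. It uses a toy game (Nim: take 1 to 3 stones, taking the last stone wins) and a tabular value estimate rather than AlphaZero's actual network-plus-search machinery; the game, the learning rule, and all constants are my own simplifications.

```python
# Minimal self-play sketch (toy stand-in for the AlphaZero idea): the agent's
# only training data comes from games it plays against itself. The game is Nim
# (take 1-3 stones; taking the last stone wins) and the learner is a tabular
# value table, not a neural network with tree search; those details are
# illustrative simplifications.
import random

PILE, MOVES, EPISODES, EPSILON, LR = 15, (1, 2, 3), 20000, 0.1, 0.05
value = {}  # value[s] ~ estimated chance that the player to move at s wins

def pick_move(stones):
    """Epsilon-greedy move choice shared by both 'players' (self-play)."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    # A move is good if it leaves the opponent in a low-value state.
    return min(legal, key=lambda m: value.get(stones - m, 0.5))

for _ in range(EPISODES):
    stones, history = PILE, []  # history: states faced by the player to move
    while stones > 0:
        history.append(stones)
        stones -= pick_move(stones)
    # Whoever just moved took the last stone and won. Walk back through the
    # history, alternating win/loss, and nudge each state's value accordingly.
    outcome = 1.0
    for s in reversed(history):
        value[s] = value.get(s, 0.5) + LR * (outcome - value.get(s, 0.5))
        outcome = 1.0 - outcome

# Losing positions (multiples of 4) should have drifted toward low values,
# discovered without a single human game in the training data.
print({s: round(value[s], 2) for s in sorted(value)})
```

The only training signal is the outcome of games the agent plays against itself; no record of human play ever enters the loop.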
Human cognition is loaded with excess baggage that evolved over eons. A human-complete synthetic intelligence would share that baggage. After all, to understand a human, one has to operate within the framework of being a human, biases and all.
As with AlphaGo, these biases may hinder performance. In Star Wars, the droid C-3PO is billed as a protocol droid for 'human-cyborg relations'. That is what a synthetic AGI will likely be.
A bridge between humans and yet other kinds of specialized intelligence.
Therefore, within a superintelligence, an AGI is more like a module required for relating to humans. It is an appendage, not the core functionality.
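A rough sketch of that framing, with entirely hypothetical class names: the human-relations layer only translates informal intent into a task specification, while specialized engines, which need not resemble human cognition at all, do the actual solving.

```python
# Hypothetical sketch of 'AGI as appendage': the human-facing module interprets
# intent; specialized, non-anthropocentric engines do the heavy lifting.
# All names and behaviors here are illustrative assumptions.

class HumanRelationsModule:
    """The 'C-3PO' layer: maps informal human requests onto formal task specs."""
    def interpret(self, request: str) -> dict:
        # A real system would need language and social-context understanding;
        # here we only pattern-match a toy phrasing.
        if "shortest" in request:
            return {"task": "optimize", "objective": "path_length"}
        return {"task": "unknown", "objective": None}

class SpecializedSolver:
    """One of many specialized engines that need not think like a human."""
    def solve(self, spec: dict) -> str:
        if spec["task"] == "optimize":
            return f"dispatching a dedicated optimizer for {spec['objective']}"
        return "no specialized engine available for this task"

class ComposedSystem:
    """The human-complete part is the interface, not the core capability."""
    def __init__(self):
        self.interface = HumanRelationsModule()
        self.solver = SpecializedSolver()

    def handle(self, human_request: str) -> str:
        return self.solver.solve(self.interface.interpret(human_request))

print(ComposedSystem().handle("find the shortest route between the depots"))
```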
But... this can't be true! Humans are the pinnacle of intelligent life; our cognition cannot be something relegated to the periphery. I'd say that objection is itself a consequence of our all-too-human anthropocentric bias.
