The only thing that remains constant is change. If you think about it, what is constant is relative to time. Furthermore, what is constant is relative to what is moving. For physicists, something that stays constant (i.e., invariant) defines a symmetry.
A symmetry, more precisely, is what stays the same under a *change* of reference frame. In other words, you can't define a constant unless there is something that changes.
What is constant is what is perceived to not change. It is the very definition of an abstraction. We are able to perceive the world because we abstract the world into things that don't change. Categories are also things that don't change.
But it's all an illusion. Things always change. So to be a competent agent, one has to understand change. How does one constant lead to another constant? How does one bunch of related constants lead to another bunch of related constants? How does one sequence lead to another?
How does a subjective agent learn about these transformations from one set to another set?
Simple. Agents interact with the world to discover how a set of constants transforms into another set of constants. How nouns transform into other nouns via verbs.
But nouns aren't supposed to change. Therefore, the way we think about the world is in terms of how nouns interact with other nouns. Semantics are embedded in what nouns can do to other nouns.
What nouns do to other nouns can be constant. In short, it can be predictable. Predictable means that at a different time, the same thing happens, because the context is the same at that different time.
Nouns can either be inanimate or subjective agents. Different species treat these things differently. Humans mostly treat plants like they are inanimate. My dog treats balls like they are alive.
The path to more complex intelligence relates to how agents predict other subjective agents. Unfortunately, almost all AI approaches assume agents predicting non-subjective agents. Self-play is different, because it assumes making predictions against a subjective agent.
Curve fitting is really good for predicting the behavior of inanimate objects. When we set up our predictions in the context of an adversarial or cooperative game, we introduce a new kind of interaction that requires contextual interpretation.
Games like Go and chess are solvable via self-play because all context is available to both participants. In reality, the context of an external agent is only partially transparent to the observing agent.
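To make that contrast concrete, here is a minimal self-play sketch (my illustration, not from the thread; the toy game, constants, and names are all assumptions). A single value table plays both sides of a small Nim game in which the full state is visible to both players, which is exactly the condition that makes Go- and chess-style self-play work:

```python
# Minimal self-play sketch on a toy Nim game (take 1-3 stones; last to take wins).
# Illustrative only: a single shared value table plays both sides.
import random
from collections import defaultdict

PILE, MOVES, EPISODES, EPS, ALPHA = 10, (1, 2, 3), 20000, 0.1, 0.5

# One Q-table shared by both "players" -- the essence of self-play.
Q = defaultdict(float)  # (pile_size, move) -> value estimate for the player to move

def choose(pile):
    legal = [m for m in MOVES if m <= pile]
    if random.random() < EPS:
        return random.choice(legal)          # occasional exploration
    return max(legal, key=lambda m: Q[(pile, m)])

for _ in range(EPISODES):
    pile, history = PILE, []
    while pile > 0:                           # full state is visible to both sides
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    reward = 1.0                              # the player who took the last stone wins
    for state, move in reversed(history):     # Monte Carlo-style credit assignment
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                      # switch to the other player's perspective

# Greedy policy after training; with full observability this tends to recover
# the known strategy of leaving the opponent a multiple of 4 stones.
best = {p: max([m for m in MOVES if m <= p], key=lambda m: Q[(p, m)])
        for p in range(1, PILE + 1)}
print(best)
```

In a partially observable setting, each player would see only a private slice of the state, and self-play alone would no longer supply the missing context about the other agent.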
Agents make predictions about other agents by maintaining a model of the other agent's cognition. But where do these models come from? The only model that a subjective agent knows of is the model of itself.
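A hedged sketch of that idea (every name here is hypothetical, not from the thread): the agent predicts a peer by projecting its own policy onto the peer's presumed point of view, because its own policy is the only cognitive model it has direct access to.

```python
# Sketch of predicting another agent from the self-model ("projection").
from dataclasses import dataclass
from typing import Callable, Dict

Observation = Dict[str, float]
Policy = Callable[[Observation], str]

@dataclass
class Agent:
    policy: Policy  # the only cognition this agent directly knows: its own

    def act(self, obs: Observation) -> str:
        return self.policy(obs)

    def predict_other(self, inferred_obs_of_other: Observation) -> str:
        # Projection: "what would *I* do if I saw what they probably see?"
        # A richer agent would swap in a learned model of the other agent here.
        return self.policy(inferred_obs_of_other)

# Toy policy: approach food if hungry, otherwise rest.
def simple_policy(obs: Observation) -> str:
    return "approach_food" if obs.get("hunger", 0.0) > 0.5 else "rest"

me = Agent(simple_policy)
# I infer (perhaps wrongly) that the other agent is hungry, then project my own policy.
print(me.predict_other({"hunger": 0.9}))  # -> "approach_food"
```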
This is where it begins its bootstrap process. An agent bootstraps its cognition by learning from the behavior of other subjective agents in its presence.
Mammals bear and rear their young. Their bootstrap process is much more complex than that of animals that lay eggs and don't rear their young. If you have teachers, then it's not entirely a bootstrap process!
Good teachers know how the minds of their students work. So it is the empathy of teachers that accelerates the learning of students. It is also the willingness of students to learn that catalyzes this process. Thus the importance of openness, conscientiousness, agreeableness, etc.
The Big Five personality traits, in fact, define different dimensions of the willingness to learn.
Let me finally end this with human-level intelligence. Brian Cantwell Smith proposed a powerful definition of computation. He wanted to know what computation is in the context of the human experience.
We use computation as a tool: tools that take our intentions and realize them through the execution of instructions. A programmer maps intention to instructions, and a computer translates instructions into realization.
However, an intelligent program does not require the mapping of the programmer. An AI, just like a human, has the capability of defining its own mapping from the intention of its user to the step-by-step realization of that intention.
The core of intelligence is thus this mapping. But here's where it gets interesting. There are two relationships here. The first is that of the user of tools. I intend to do something and I know how to use tools to achieve my goal. This is the conventional AI stance.
But there's a second relationship: the interpretation of intention. This is what humans excel at. That is because being competent in a social setting requires that you understand why other people do what they do.
What word in the English dictionary captures this capability? It's known as empathy. gum.co/empathy

