Interview with Walid Saba, who appears to have a core ontology of just 2,000 types.
This is his layering. Note that the middle layers are 'affordances' of the nouns in their arguments.
At the base are some primitive types. @markburgess_osl has a similar set.
This reminds me also of semantic primes.
Humans, as well as animals, have an innate vocabulary for understanding the world. It is interesting, however, that even among human languages there are primitives that are present in some and absent in others.
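To make the layering idea concrete, here is a hypothetical toy sketch in Python (my own illustration; Saba's actual ontology is not public in this form, and every type name here is invented). The point is that middle-layer 'affordance' types constrain which nouns can fill a predicate's arguments:

```python
# Invented toy ontology: primitives at the top, affordances in the middle,
# noun types at the leaves. Not Saba's real type system.

class Entity: pass                      # primitive type
class PhysicalObject(Entity): pass      # primitive type
class Human(PhysicalObject): pass

class Artifact(PhysicalObject): pass    # middle layer
class Readable(Artifact): pass          # affordance: things that can be read
class Edible(PhysicalObject): pass      # affordance: things that can be eaten

class Book(Readable): pass              # leaf noun types
class Apple(Edible): pass

def read(agent: Human, obj: Readable) -> None:
    # The argument type encodes the affordance: only Readable things fit.
    print(f"{type(agent).__name__} reads {type(obj).__name__}")

read(Human(), Book())     # fine: Book is Readable
# read(Human(), Apple())  # a type checker such as mypy would reject this
```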
The only thing that remains constant is change. If you think about it, what is constant is relative to time. Furthermore, what is constant is relative to what is moving. For physicists, something that is constant (i.e., invariant) describes a symmetry.
A symmetry, furthermore, is defined with respect to a *change* of reference frame: something is symmetric when it remains invariant under that change. In other words, you can't define a constant unless there is something that changes.
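A minimal worked example of this (my addition, not from the thread): rotate the reference frame of the plane by an angle θ. Each coordinate changes, yet their squared sum does not, so the constant (the radius) only exists relative to the change (the rotation).

```latex
x' = x\cos\theta - y\sin\theta, \qquad
y' = x\sin\theta + y\cos\theta
\quad\Longrightarrow\quad
x'^2 + y'^2 = x^2 + y^2
```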
What is constant is what is perceived not to change; that is the very definition of an abstraction. We are able to perceive the world because we abstract it into things that don't change. Categories are also things that don't change.
I agree that a single deep learning network can only interpolate. But a GAN or a self-play network can obviously extrapolate. The question, though, is where the ingenuity is coming from. medium.com/intuitionmachi…
I'm actually very surprised that @ecsquendor, who has good videos summarizing the state of understanding in deep learning, is fawning over @fchollet's ideas. I'm perplexed by what Chollet calls extrapolation:
The "Reward is Enough" paper offers a piss-poor explanation as to how AGI is achieved. I'm actually more surprised that the @DeepMind wrote such a poorly constructed philosophical paper. sciencedirect.com/science/articl…
The major flaw of the 'Reward is Enough' paper is that the authors don't know how to disentangle the self-referential nature of the problem. So it's written in a way that sounds like a bunch of poorly hidden tautological statements.
Damn, had to slow down from 2x to 1.5x to even understand @coecke
Haha, I agree, I also don't understand the difference between strong and weak emergence! And I'm writing a book with emergence in its title! gum.co/empathy
I'm a bit perplexed about the achievements of Cyc. Have symbolic systems ever achieved the level of robustness needed to be of real utility? Not sure why this panel is full of praise for it.
Yannic asked a good question: you can't just throw around buzzwords like 'abstraction' and 'semantics' without proposing an approach for achieving them. It's not clear how you get from symbolic manipulation to common sense.
Just as quantum mechanics is unintuitive to humans, it is likely that parallel distributed computation is also unintuitive. NAND gates are not intuitive. SK combinators are not intuitive. The building blocks of cognition are likely unintuitive as well.
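A small sketch of both claims (my own illustration, plain Python, nothing assumed beyond the standard language): NAND alone is universal for Boolean logic, and the S and K combinators alone are universal for computation, yet nothing about the primitives hints at what they can express.

```python
# NAND as the sole Boolean primitive: NOT, AND, OR all emerge from it.
nand = lambda a, b: not (a and b)
not_ = lambda a: nand(a, a)
and_ = lambda a, b: not_(nand(a, b))
or_  = lambda a, b: nand(not_(a), not_(b))   # De Morgan
assert or_(False, True) and not and_(True, False)

# The S and K combinators: K discards an argument, S distributes one.
K = lambda x: lambda y: x                     # K x y = x
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)

# The identity function falls out of composing them: I = S K K.
I = S(K)(K)
assert I(42) == 42
```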
Human minds are simply incapable of explaining how human minds work. At best we can explain the emergent properties, but not the underlying mechanisms.
Of course, we still need good metaphors to partially explain human cognition. We need them so that we can formulate explanations for methods of teaching, decision-making, and idea generation. We cannot be blind to our own cognitive nature.