So I'm reading this blog from Walid Saba where he presents 5 examples that are difficult to handle in NLP. medium.com/ontologik/sema…
(1) Sara likes to play bridge (2) Sara has a Greek statue in every corner of her apartment (3) Sara loves to eat pizza with her kids (with pineapple) (4) Sara enjoyed the movie (5) The White House criticized the recent remarks made by Beijing
I ran these against GPT-3 to validate its understanding of them. GPT-3 appeared to 'know' that there was more than 1 statue for (2) and wasn't able to resolve what (4) might mean. However, it disambiguated the others correctly.
It doesn't work out of the box, but with sufficient scaffolding, you get a better-than-reasonable ability to disambiguate sentences. Without a doubt, GPT-3 remains a very promising foundation on which to build more precise NLU systems.
Here's the deal, though: when GPT-3 processes a statement, you have to ask it questions to probe its understanding of that statement. But where do these questions come from, and does GPT-3 answer them only when asked?
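As an aside, here's a minimal sketch of what that probing scaffold could look like, assuming the classic OpenAI Completion API (the pre-1.0 `openai` package); the engine name, prompt wording, and probe question are illustrative assumptions, not the exact setup I ran:

```python
# A minimal sketch of the probing scaffold, assuming the classic OpenAI
# Completion API. The engine name, prompt wording, and probe question
# are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def probe(statement: str, question: str) -> str:
    """Ask GPT-3 a probing question about a statement, return the answer."""
    prompt = f"Statement: {statement}\nQuestion: {question}\nAnswer:"
    response = openai.Completion.create(
        engine="davinci",   # assumed GPT-3 engine
        prompt=prompt,
        max_tokens=32,
        temperature=0,      # deterministic answers suit probing
    )
    return response.choices[0].text.strip()

# Probing sentence (2) for the number of statues:
print(probe(
    "Sara has a Greek statue in every corner of her apartment.",
    "Does Sara have one statue or more than one statue?",
))
```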
The key idea that many seem to overlook about cognition is that understanding involves asking questions. People understand statements differently because they are subconsciously asking different questions.
The more advanced thinkers ask questions that are more probing. It is when these questions trigger an unexpected answer that they rise to our consciousness. Otherwise, our minds are subconsciously busy answering questions that build a picture of the world.
It is analogous to what's going on in the Dall-E system where interpretations are being ranked for relevance. medium.com/intuitionmachi…
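To make "ranking interpretations for relevance" concrete, here's a toy sketch; the Jaccard word-overlap scorer is a stand-in of my own, not DALL-E's actual CLIP-based ranker:

```python
# A toy sketch of rank-by-relevance: score each candidate interpretation
# against the query and sort. The word-overlap scorer is a stand-in,
# not DALL-E's actual ranking model.
def relevance(query: str, candidate: str) -> float:
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c)  # Jaccard overlap as a toy score

def rank(query: str, candidates: list) -> list:
    return sorted(candidates, key=lambda c: relevance(query, c), reverse=True)

print(rank("Sara enjoyed the movie", [
    "Sara enjoyed watching the movie",
    "Sara is a movie",
]))
```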
My new book storage and retrieval system. The containers are see-through so I can find a book. You can place books on both sides for higher-density storage.
Books in my current focus move to the top container for easy access. Like bubble sort, those of lesser interest move to the bottom. The stack maintains its height by pruning books at the bottom and turning them into PDFs.
The added benefit of this system is that you have to exert additional physical effort to retrieve books at the bottom, thereby incentivizing the need to have them digitized.
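For fun, here's the same scheme as a data structure; a rough sketch, with all names hypothetical:

```python
# A playful sketch of the book stack as a data structure: reading a book
# bubbles it to the top, and anything pushed past the height limit is
# pruned from the bottom and "digitized". Names are hypothetical.
class BookStack:
    def __init__(self, max_height: int):
        self.max_height = max_height
        self.books = []  # index 0 is the top of the stack

    def add(self, title: str) -> list:
        """Shelve a new book on top, returning any books to digitize."""
        self.books.insert(0, title)
        digitized = self.books[self.max_height:]
        self.books = self.books[:self.max_height]
        return digitized

    def read(self, title: str) -> None:
        """Reading a book moves it back to the top (its bubble-sort moment)."""
        self.books.remove(title)
        self.books.insert(0, title)

stack = BookStack(max_height=3)
for t in ["Book A", "Book B", "Book C"]:
    stack.add(t)
stack.read("Book A")        # "Book A" bubbles to the top
print(stack.add("Book D"))  # prints the pruned bottom book: ['Book B']
```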
It is often said that money buys freedom. But it is rarely mentioned that money also buys stability and reduces uncertainty. This other utility is what people seek more than freedom.
A majority of us will sacrifice our freedom and our youth for the stability and certainty of a steady paycheck. They say money buys happiness...
but in reality, it buys predictability. Because as humans, we value competence in this world, and you cannot feel competent if you cannot predict the world.
An unexplored scenario for humanity is that as technology becomes more advanced, we become more adept at detecting aliens. But the aliens have no interest in our affairs, so we know of their existence but they never attempt to interact.
The prime directive envisioned in Star Trek may in fact exist and we just happen to be that backward civilization. It's just like how we treat tribes in the Amazon. We allow them to thrive in complete isolation.
Any advanced civilization will likely have a history where members of their race chose to live in isolation. So it should be no surprise that they would leave our civilization alone to grow up on its own.
Artificial General Intelligence is the field whose quest is to automate human cognition. AGI is not superintelligence.
Computers and handheld calculators can perform calculations that are beyond the capabilities of any human. Yet we don't call these things intelligent, despite their 'superhuman' computational capability.
Can AGI lead to superintelligence? The current consensus is yes.
The only thing that remains constant is change. If you think about it, what is constant is relative to time. Furthermore, what is constant is relative to what is moving. For physicists, something that is constant (i.e., invariant) describes a symmetry.
A symmetry is furthermore defined relative to a *change* in a reference frame. In other words, you can't define a constant unless there is something that changes.
What is constant is what is perceived to not change. It is the very definition of an abstraction. We are able to perceive the world because we abstract the world into things that don't change. Categories are also things that don't change.
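In physics notation, that claim reads roughly like this (a standard formulation, offered as a paraphrase rather than a quote from any source):

```latex
% A quantity Q is constant (invariant) precisely when it is unchanged
% under some transformation T of the reference frame:
\[
  Q(T(x)) = Q(x) \quad \text{for all } x.
\]
% The symmetry is the transformation T itself: without a T that changes
% x, the statement "Q is constant" carries no content.
```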
I agree that a single deep learning network can only interpolate. But a GAN or a self-play network can obviously extrapolate. The question, though, is where the ingenuity comes from. medium.com/intuitionmachi…
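A toy illustration of the interpolation/extrapolation distinction (my own example, not Chollet's):

```python
# Fit a polynomial on x in [0, 1], then query inside and outside that
# range. Inside the range the fit is decent; outside it diverges badly.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=100)
y_train = np.sin(2 * np.pi * x_train)         # the true function

coeffs = np.polyfit(x_train, y_train, deg=3)  # cubic fit on [0, 1]

for x in (0.5, 2.0):  # 0.5 interpolates, 2.0 extrapolates
    print(f"x={x}: model={np.polyval(coeffs, x):+.2f}, "
          f"true={np.sin(2 * np.pi * x):+.2f}")
```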
I'm actually very surprised that @ecsquendor, who has good videos summarizing the state of understanding in deep learning, is fawning over @fchollet's ideas. I'm perplexed by what Chollet calls extrapolation: