What is wrong with knowledge representation that it has barely moved the needle in machine understanding? @danbri
Intuitively, KR should be useful: it diagrammatically records how concepts relate to one another. Yet, for reasons that are not apparent, it isn't very useful for deriving new understandings of the concepts in its graph. Where did we go wrong?
Perhaps it's because knowledge graphs are noun-centric and not verb-centric, while reality is verb-centric. To get an intuition for this, watch this explanation of the open-world game NetHack:
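To make the contrast concrete, here is a minimal sketch (the entity and relation names are hypothetical, not from the thread) of a noun-centric triple store versus a verb-centric event record. The key difference: the event record can be executed against a world state, which is what an open-world game like NetHack does constantly.

```python
# Noun-centric: entities are primary; the verb is a frozen edge label.
noun_centric = [("hero", "carries", "sword"), ("sword", "located_in", "dungeon")]

# Verb-centric: the action is the primary object; participants, preconditions,
# and effects hang off it, so the dynamics of the world are representable.
verb_centric = {
    "event": "carry",
    "agent": "hero",
    "object": "sword",
    "preconditions": [("sword", "located_in", "dungeon")],
    "effects": [("sword", "held_by", "hero")],
}

def apply_event(state, event):
    # An event record is executable: retract its preconditions, assert its effects.
    return (state - set(event["preconditions"])) | set(event["effects"])

world = set(noun_centric)
world = apply_event(world, verb_centric)
print(("sword", "held_by", "hero") in world)  # True
```

A static triple store has no counterpart to `apply_event`: it can only be queried, not advanced through time.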
Wolfram @stephen_wolfram has a nuanced take on language understanding. For the narrow domain of question answering, his symbolic system should be able to derive the answers to known questions. wolfram.com/language/princ…
Wolfram Language is an interesting framing. Are AIs essentially advanced programming languages? Does it make sense that our existing programming languages are decoupled from knowledge about the world?
Ben Goertzel, in his roadmap for AGI, describes a system that is also based on rewrite rules. Is this the problem with KBs: that they are declarative and not imperative? Is knowledge in general intelligence stored in imperative form?
What does it mean for knowledge to be stored in imperative form, and how is this form more useful than declarative forms? The problem with declarative forms is that the information needed to interpret them is left implicit. It is not encoded anywhere in the knowledge graph.
There is a continuum in how much is left implicit in the serialization of imperative information. For example, machine language is more detailed than code written in a high-level programming language; it is the interpreters and compilers that translate code into exact actions.
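The point about implicit interpretation can be sketched in a few lines (all names here are illustrative assumptions, not from the thread). A declarative fact is inert until some external interpreter decides what its predicate means; an imperative encoding carries its own interpretation.

```python
# Declarative: the fact alone says nothing about how to use it.
facts = {("water", "boils_at_celsius", 100)}

def interpret(facts, substance, temp):
    # The meaning of "boils_at_celsius" lives here, OUTSIDE the knowledge
    # graph -- exactly the implicitness the thread is complaining about.
    for s, p, boiling_point in facts:
        if s == substance and p == "boils_at_celsius":
            return temp >= boiling_point
    return None

# Imperative: the knowledge IS the procedure; interpretation is built in.
def is_boiling(substance, temp):
    boiling_points = {"water": 100}
    return temp >= boiling_points[substance]

print(interpret(facts, "water", 120))  # True
print(is_boiling("water", 90))         # False
```

Swap out `interpret` and the same declarative triple can mean something entirely different; the graph itself cannot tell you which reading was intended.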
The information held in KGs is rigid in the sense that concepts are interconnected through predefined connections. This rigidity does not accommodate the flexibility of thought required by general intelligence.
Contrast this with the fuzzy nature of a language model like GPT-3. Why can't we generate interesting content from a KG the way we can from GPT-3? Why does GPT-3 capture knowledge in a more human-like manner than a manually encoded KG does?
To be fair, GPT-3 knows nothing about this world. But it knows almost everything that is useful about English syntax. Entangled in English usage are fragments of semantic relations that GPT-3 is very proficient at mimicking.
What then is the solution? Simple: train GPT-3 on this yet-to-be-described imperative knowledge encoding.
• • •

Thread by Carlos E. Perez (@IntuitMachine).