New paper from me: "Abstraction and Analogy-Making in Artificial Intelligence": arxiv.org/abs/2102.10717
🧵 (1/4)
This paper is part review, part opinion. I argue that conceptual abstraction is driven by analogy, and that analogy is an understudied area of AI that will be essential to overcoming the brittleness and narrowness of AI systems. (2/4)
I review both older and very recent approaches to analogy in AI, including symbolic systems, deep learning, and probabilistic program induction. I then propose some ideas for how best to make progress in this area. (3/4)
I would be very happy to receive any feedback on these ideas! (4/4)
@ezraklein I think your statement is a misunderstanding of what GPT-3 is and what it can do. First, GPT-3 is not "getting better" in any way. It was trained once on a huge amount of text data and then its weights were fixed. It does not improve at all as people use it.
@ezraklein Second, GPT-3 cannot do "much of what we do". It can generate paragraphs of human-sounding text, but it doesn't understand what it "says" in any humanlike way. AI is nowhere near being able to do "much of what we do".
@ezraklein Finally, I think we humans don't have good intuitions about two things regarding GPT-3: the scale of the system and its training data -- in effect how much the system can, and has effectively memorized; and how a system that has memorized such a vast amount of language can...
First thought: Why frame this as "DeepMind lost $572 million" rather than "Google made a very large, long-term investment in one of its research arms, DeepMind"? /2
DeepMind's mission is to "solve intelligence" -- a statement I find nonsensical BTW -- but whatever you want to call it, AI is a very hard, long-term research problem and it's great for companies to fund basic, long-term research that doesn't have immediate payoff. /3