New paper from me: "Abstraction and Analogy-Making in Artificial Intelligence": arxiv.org/abs/2102.10717

🧵 (1/4)
This paper is part review, part opinion. I argue that conceptual abstraction is driven by analogy, and that analogy is an understudied area of AI that will be essential to overcoming the brittleness and narrowness of AI systems. (2/4)
I review both older and very recent approaches to analogy in AI, including symbolic systems, deep learning, and probabilistic program induction. I then propose some ideas for how best to make progress in this area. (3/4)
I would be very happy to receive any feedback on these ideas! (4/4)

More from @MelMitchell1

27 Nov 20
@ezraklein I think your statement is a misunderstanding of what GPT-3 is and what it can do. First, GPT-3 is not "getting better" in any way. It was trained once on a huge amount of text data and then its weights were fixed. It does not improve at all as people use it (see the sketch below this thread).
@ezraklein Second, GPT-3 cannot do "much of what we do". It can generate paragraphs of human-sounding text, but it doesn't understand what it "says" in any humanlike way. AI is nowhere near being able to do "much of what we do".
@ezraklein Finally, I think we humans don't have good intuitions about two things regarding GPT-3: the scale of the system and its training data -- in effect, how much the system can, and has, effectively memorized; and how a system that has memorized such a vast amount of language can...
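To make the fixed-weights point concrete, here is a minimal sketch of why "use" cannot change a trained language model. It is illustrative only: it substitutes the small, publicly released GPT-2 checkpoint for GPT-3 (whose weights are not public), and the prompt is an arbitrary assumption.

```python
# Minimal sketch: generating text does not update a trained model's weights.
# GPT-2 stands in for GPT-3 here (an assumption; GPT-3's weights are not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # inference mode: no dropout, and no optimizer is ever created

# Snapshot every parameter before "using" the model.
before = {name: p.detach().clone() for name, p in model.named_parameters()}

with torch.no_grad():  # no gradients are even computed during generation
    ids = tokenizer("Analogy is", return_tensors="pt").input_ids
    model.generate(ids, max_new_tokens=20)

# Every parameter is bit-for-bit identical after use: the model learned nothing.
assert all(torch.equal(p, before[n]) for n, p in model.named_parameters())
```

Generation is a pure forward pass; without a gradient step the parameters cannot move, so the deployed model is exactly as capable, and as limited, after any amount of use.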
14 Aug 19
A thought-provoking article by @GaryMarcus. Accordingly, I had some thoughts! Longish thread ahead. /1 wired.com/story/deepmind…
First thought: Why frame this as "DeepMind lost $572 million" rather than "Google made a very large, long-term investment in one of its research arms, DeepMind"? /2
DeepMind's mission is to "solve intelligence" -- a statement I find nonsensical BTW -- but whatever you want to call it, AI is a very hard, long-term research problem and it's great for companies to fund basic, long-term research that doesn't have immediate payoff. /3
