Thomas G. Dietterich (@tdietterich), 10 tweets
Disappointing article by @GaryMarcus. He barely addresses the accomplishments of deep learning (e.g., neural machine translation) and minimizes others (e.g., dismissing ImageNet's 1000 categories as small, "very finite"?). 1/
DL learns representations as well as mappings. Deep machine translation reads the source sentence, represents it in memory, then generates the output sentence. It works better than anything GOFAI ever produced. 2/
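The read/represent/generate pipeline can be sketched in miniature. This is a toy, not real neural MT: the "embeddings" below are hand-picked rather than learned, and the words and vectors are invented for illustration. It only shows the shape of the computation: encode the source into a fixed-size memory, then decode target words from that memory.

```python
# Toy sketch of the encode -> memory -> decode pipeline behind neural MT.
# Real systems use learned RNN/Transformer encoders and decoders; these
# hand-picked vectors just keep the example self-contained.

SRC_EMB = {          # hypothetical source-word vectors
    "chat": [1.0, 0.0],
    "noir": [0.0, 1.0],
}
TGT_EMB = {          # hypothetical target-word vectors
    "cat":   [1.0, 0.0],
    "black": [0.0, 1.0],
}

def encode(sentence):
    """Read the source sentence into a fixed-size memory vector (a sum here)."""
    memory = [0.0, 0.0]
    for word in sentence:
        memory = [m + v for m, v in zip(memory, SRC_EMB[word])]
    return memory

def decode(memory, length):
    """Greedily emit target words that best match the memory, subtracting
    each emitted word's vector (a crude stand-in for attention)."""
    out, mem = [], list(memory)
    for _ in range(length):
        word = max(TGT_EMB,
                   key=lambda w: sum(m * v for m, v in zip(mem, TGT_EMB[w])))
        out.append(word)
        mem = [m - v for m, v in zip(mem, TGT_EMB[word])]
    return out

print(decode(encode(["chat", "noir"]), 2))  # emits both target words
```

The point of the sketch is that nothing symbolic mediates between source and target: the entire "meaning" of the input lives in the learned (here, faked) memory vector.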
Marcus complains that DL can't extrapolate, but NO method can extrapolate. What appears to be extrapolation from X to Y is interpolation in a representation that makes X and Y look the same. This is even more true for logical reasoning than it is for connectionist methods. 3/
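A minimal numerical illustration of this point, under invented data: fit y = x² on small x and predict far outside the training range. A linear model on the raw input fails, but the same linear model on the feature z = x² gets the faraway point exactly, because in z-space the prediction is just interpolation along a line.

```python
# "Extrapolation" is interpolation in the right representation:
# train on x in {1,2,3} with y = x^2, then predict at x = 10.

xs = [1.0, 2.0, 3.0]
ys = [x * x for x in xs]          # ground truth: y = x^2

def fit_linear(features, targets):
    """One-feature least squares through the origin: w = sum(f*y) / sum(f*f)."""
    return sum(f * y for f, y in zip(features, targets)) / \
           sum(f * f for f in features)

w_raw  = fit_linear(xs, ys)                    # model: y ~ w * x
w_feat = fit_linear([x * x for x in xs], ys)   # model: y ~ w * x^2

print(w_raw * 10.0)       # ~25.7: linear in x, far from the true 100
print(w_feat * 10.0 ** 2) # 100.0: linear in z = x^2, exact
```

The model class is identical in both cases; only the representation of the input changed, and that is what turned an impossible extrapolation into a trivial one.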
DL is able to learn such representations better than any previous learning method. But I agree that these are just baby steps toward learning higher abstractions that would enable the kinds of "extrapolations" Marcus seeks 4/
I'm excited by recent work on learning disentangled representations, especially beta-VAEs (Higgins et al., ICLR 2017) and the theory of Achille and Soatto (arXiv:1706.01350) relating compression and minimality to disentanglement. 5/
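For readers unfamiliar with the beta-VAE objective: it is the standard VAE loss with the KL term upweighted by a factor beta > 1, which pressures the posterior toward the factorized prior N(0, I) and is the mechanism credited with encouraging disentanglement. A sketch of the loss, using the closed-form KL between a diagonal Gaussian and the standard normal (the function names here are my own, not from the paper):

```python
import math

# beta-VAE objective: L = reconstruction_loss + beta * KL(q(z|x) || N(0, I)).
# With beta = 1 this is the ordinary VAE ELBO (negated).

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv
        for m, lv in zip(mu, log_var)
    )

def beta_vae_loss(recon_loss, mu, log_var, beta=4.0):
    return recon_loss + beta * gaussian_kl(mu, log_var)

# A posterior equal to the prior (mu = 0, variance = 1) contributes zero KL:
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

Raising beta trades reconstruction quality for a latent code whose dimensions are pushed, one by one, toward independence from the others.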
I'm also interested in meta-learning methods that reflect on the low-level representations learned by DL. Maybe they will be able to learn higher-level abstractions? 6/
I believe that learning the right abstractions will address the key problems Marcus mentions: data hunger, vulnerability to adversarial examples, failure to extrapolate, lack of transparency. 7/
DL is essentially a new style of programming--"differentiable programming"--and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc. 8/
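What "differentiable programming" means mechanically: every operation records how to push gradients back to its inputs, so any program built from such operations can be trained end-to-end. Constructs like convolution or an LSTM cell are just larger graphs of these primitives. A minimal reverse-mode autodiff sketch (a toy, not how production frameworks like PyTorch implement it; the `Var` class is invented for illustration):

```python
# Minimal reverse-mode automatic differentiation: the core mechanism
# underlying differentiable programming frameworks.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (input Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate d(output)/d(self) along every path (fine for small graphs)."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Note that `x` feeds `z` along two paths (through the product and directly), and the backward pass correctly sums both contributions, which is exactly the chain rule the reusable constructs above all rely on.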
But no one thinks we have a complete set. No one knows the limits of differentiable programming. But we continue to make rapid progress, and our theoretical understanding is improving too. 9/
We certainly need more theory and better engineering, but there are many many promising research ideas to pursue. end/