Thread by Shimon Whiteson, 13 tweets, 2 min read
Rich Sutton has a new blog post entitled “The Bitter Lesson” (incompleteideas.net/IncIdeas/Bitte…) that I strongly disagree with. In it, he argues that the history of AI teaches us that leveraging computation always eventually wins out over leveraging human knowledge.
I find this a peculiar interpretation of history. It’s true that many efforts to incorporate human knowledge into AI have been discarded, and that more tend to be discarded as other resources (not just computation, but memory, energy, and data) become plentiful.
But the success of the resulting approaches depends not only on those plentiful resources but on the human knowledge that was NOT discarded.
Good luck trying to do deep learning without convolutions, LSTMs, ReLUs, batch normalisation, etc. Good luck trying to solve Go without the prior knowledge that the problem is stationary, zero sum, and fully observable.
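As a toy sketch of how much work one of those Go priors does: the zero-sum assumption is exactly what lets game-tree search track a single value and simply negate it between plies (negamax), rather than modelling each player's payoff separately. The game below (a subtraction game: remove 1 to 3 stones, whoever takes the last stone wins) is a stand-in chosen for brevity, not anything from the thread.

```python
def negamax(stones: int) -> int:
    """Return +1 if the player to move wins with optimal play, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone, so we have lost
    # Zero-sum prior baked in: the opponent's value is exactly the
    # negation of ours, so one recursive value function suffices.
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

# Multiples of 4 are losing positions for the player to move.
print([negamax(n) for n in range(1, 9)])  # [1, 1, 1, -1, 1, 1, 1, -1]
```

Without the zero-sum prior (a general-sum game), this negation step is invalid and the search must carry a payoff per player.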
So the history of AI is not the story of the failure to incorporate human knowledge. On the contrary, it is the story of the success of doing so, achieved through an entirely conventional research strategy: try many things and discard the 99% that fail.
The 1% that remain are as crucial to the success of modern AI as the massive computational resources on which it also relies.
Sutton says that the intrinsic complexity of the world means we shouldn’t build prior knowledge into our systems. But I conclude the exact opposite: that complexity leads to crippling intractability for the search and learning approaches on which Sutton proposes to rely.
Only with the right prior knowledge, the right inductive biases, can we ever get a handle on that complexity.
He says “Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better”. The use of the word ‘only’ highlights the arbitrariness of the claim.
Deep learning wouldn’t succeed without those convolutions and invariances, but these are deemed minimal and general enough to be acceptable.
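A rough numeric sketch of how much that "minimal" prior buys (the shapes here are illustrative, not from the thread): convolutional weight sharing encodes translation invariance, collapsing the parameter count of an image layer by several orders of magnitude compared with a fully-connected map of the same input to the same output.

```python
# Map a 32x32x3 image to 32x32x16 features (bias terms omitted):
h, w, c_in, c_out = 32, 32, 3, 16
k = 3  # kernel size

# Fully connected: every input-output pixel pair gets its own weight.
dense_params = (h * w * c_in) * (h * w * c_out)
# Convolution: one small kernel, reused at every spatial position.
conv_params = k * k * c_in * c_out

print(dense_params)  # 50331648
print(conv_params)   # 432
```

The prior is "only" weight sharing plus locality, yet it is what makes the learning problem tractable at all.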
In this way, “The Bitter Lesson” avoids the main question, which is not WHETHER to incorporate human knowledge (because the answer is trivially yes) but WHAT that knowledge should be and WHEN and HOW to use it.
Sutton says “We want AI agents that can discover like we can, not which contain what we have discovered.” Sure, but we are so good at discovering precisely because we are hardwired with the right inductive biases.
The Sweet Lesson of the history of AI is that, while finding the right inductive biases is hard, doing so enables massive progress on otherwise intractable problems.