What's deep learning?

The "common usage" definition as of 2019 would be "chains of differentiable parametric layers trained end-to-end with backprop".

But this definition seems overly restrictive to me. It describes *how we do DL today*, not *what it is*.
If you have a convnet and you train its weights with ADMM, is that no longer deep learning?

Is an HMAX model (with learned features) not deep learning?

Is a deep neural network trained greedily layer-by-layer not deep learning?

I say they're all deep learning.
Deep learning refers to an approach to representation learning where your model is a chain of modules (typically a stack / pyramid, hence the notion of depth), each of which could serve as a standalone feature extractor if trained as such.

That's also how I define it in my book.
This stands in contrast to:

1) Things that are not representation learning (e.g. manual feature engineering like SIFT, symbolic AI, etc.)

2) "Shallow learning", where there is a single feature extraction layer.
It does not prescribe a specific learning mechanism (e.g. backprop) or a specific use case (e.g. supervised learning or RL), and it does not require end-to-end joint learning (as opposed to greedy learning).

It's the *what* (nature and structure), not the *how*.
This definition draws a clear boundary: some things are DL, some things aren't.
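To make the structure concrete, here's a minimal sketch in Keras of "a chain of modules, each of which could serve as a standalone feature extractor". The layer sizes and names are purely illustrative:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A chain of modules (a stack, hence "depth").
inputs = keras.Input(shape=(784,))
h1 = layers.Dense(256, activation="relu", name="features_1")(inputs)
h2 = layers.Dense(128, activation="relu", name="features_2")(h1)
outputs = layers.Dense(10, activation="softmax")(h2)
model = keras.Model(inputs, outputs)

# Any intermediate module can be pulled out and reused
# as a standalone feature extractor:
feature_extractor = keras.Model(inputs, model.get_layer("features_2").output)
```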

The 2019 flavors of DNNs are DL, of course. So are DNNs trained with backprop alternatives like ES, ADMM, or virtual gradients.
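For instance, here's a toy sketch of evolution strategies (ES) optimizing a model's weights with no gradient computation at all. The regression problem and hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5)                 # toy regression target

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(5)
sigma, lr, n_pop = 0.1, 0.05, 50           # noise scale, step size, population size
for step in range(300):
    noise = rng.normal(size=(n_pop, 5))    # population of weight perturbations
    losses = np.array([loss(w + sigma * eps) for eps in noise])
    scores = (losses - losses.mean()) / (losses.std() + 1e-8)
    # Estimate a descent direction from how perturbations correlate with loss;
    # no backprop anywhere, yet the weights are still learned.
    w -= lr / (n_pop * sigma) * noise.T @ scores
```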

Genetic programming is not DL. Quicksort is not DL. Nor is SVM.
A single Dense layer is not DL. But a stack of Dense layers is DL.

K-means is not DL. But stacking k-means feature extractors is DL.
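A minimal sketch of that idea with scikit-learn, in the spirit of Coates & Ng-style k-means features. The shapes and cluster counts are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # stand-in for raw input vectors

# Layer 1: cluster the raw inputs; represent each point by its
# (negated) distances to the learned centroids.
km1 = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X)
f1 = -km1.transform(X)

# Layer 2: a second k-means feature extractor, trained on layer-1 features.
km2 = KMeans(n_clusters=16, n_init=10, random_state=0).fit(f1)
f2 = -km2.transform(f1)            # the "deep" representation
```

One k-means transform is shallow learning; chaining them gives a hierarchy of learned representations, which is what puts it on the DL side of the boundary.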

When in 2011-12 I was doing stacked matrix factorization over matrices of pairwise mutual information of locations in video data, that was deep learning.
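Very loosely, a stacked-factorization pipeline of that flavor might look like the sketch below. This is a hypothetical reconstruction for illustration only, not the actual 2011-12 system:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a co-occurrence count matrix between "locations".
counts = rng.integers(1, 100, size=(300, 300)).astype(float)
joint = counts / counts.sum()
pmi = np.log(joint / np.outer(joint.sum(axis=1), joint.sum(axis=0)))

# Level 1: truncated SVD factorization of the PMI matrix.
U1, s1, _ = np.linalg.svd(pmi, full_matrices=False)
z1 = U1[:, :64] * s1[:64]          # first-level representation

# Level 2: factorize the first-level representation in turn.
U2, s2, _ = np.linalg.svd(z1, full_matrices=False)
z2 = U2[:, :16] * s2[:16]          # second-level representation
```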
Programs typically written by human engineers are not DL. Parametrizing such programs to learn a few constants automatically is still not DL. You need to be doing representation learning with a chain of feature extractors.
By definition, deep learning is a gradual, incremental way to extract representations from data. In its modern incarnation, it's even at least C^1 continuous (more typically C^∞). That last part isn't essential, but *incrementality* is intrinsic to DL.
So DL is a fundamentally different beast from symbol manipulation and regular programming, which are discrete, flow-centric, and don't usually involve intermediate data representations.

You could do symbol manipulation with DL, but it involves lots of extra steps.
These are two entirely different takes on data manipulation.

Deep learning isn't just end-to-end gradient descent, but not every program is deep learning either. In fact, deep learning models represent only a tiny, tiny slice of program space.

It can't hurt to look beyond it.