1/ While this will play well (and get cited a lot) among the anti-#deeplearning holdouts, I was left a bit underwhelmed. I wanted to find some interesting edge cases where DL is not working (so we can work out solutions), but instead got a set of pretty unreasonable comparisons.
2/ The deep learning models are tiny (4 conv layers), with the justification that this works for MNIST. Everything works for MNIST! Linear regression works for MNIST! (See the sketch after this tweet.)

xiaoliangbai.com/2017/02/01/ten…

We know that on complex images, deeper and more complex models are vastly better, and overfit less!
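
To make the MNIST point in 2/ concrete, here is a minimal sketch (mine, not from the paper) of a purely linear classifier on MNIST, multinomial logistic regression via scikit-learn. It typically lands around 92% accuracy, which is why "it works on MNIST" says almost nothing about an architecture choice:

# Minimal sketch (not from the paper): a plain linear classifier on MNIST.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel intensities to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0
)

clf = LogisticRegression(max_iter=200)  # one linear decision layer, no convolutions
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically around 0.92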
3/ The linear and non-deep models are not an "apples to apples" comparison either, though. This isn't deep learning vs simple models; it is deep learning vs incredibly complex feature engineering built up over decades of research.
4/ They compare DL trained on image data against regression and SVMs trained on high-level extracted features derived from scientific knowledge: quantitative grey and white matter volume measurements, atlas-derived features, activity patterns in fMRI, and so on.
5/ So here is a question. If the major difference between young and old brains is brain volume, and you have a linear model with brain volume as an input, should you expect deep learning to outperform it?
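
A toy illustration of that question (entirely synthetic data of my own construction, not from the paper): if age really is close to a linear function of brain volume plus noise, a one-feature linear regression already sits near the noise ceiling, and deep learning has no headroom to beat it:

# Hypothetical toy data: age generated as a linear function of brain volume.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
brain_vol = rng.normal(1200.0, 100.0, n)             # cm^3, made-up distribution
age = 160.0 - 0.1 * brain_vol + rng.normal(0, 5, n)  # toy generative story

X = brain_vol.reshape(-1, 1)
r2 = cross_val_score(LinearRegression(), X, age, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")  # near the noise ceiling by construction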
6/ Deep learning will *never* beat an optimal set of engineered features. The benefits of deep learning are:
1: you don't need decades of painstaking research to find the right features (of all domains, neuroscience is pretty much the pinnacle of this)
7/ and 2: you can maybe find better features you haven't thought of.

So all they can possibly test here is 2. Does deep learning find better features than decades and billions of dollars of neuroscience research...
8/ their answer is "no ... it does about as well as billions of dollars and decades of research, when using deep learning models not suited to the task, on modest-sized datasets (by DL standards)."

The latter point is fine; they are arguing "with current scale data..."
9/ but the former is ... supportive of DL? I'd see this as a massive win.
10/ so, as far as I can see, there is nothing actually technically wrong here (although I'm generally worried about multiple hypothesis testing in the non-deep models; see the sketch after the next tweet), but the discussion (while pretty comprehensive and aware of various likely challenges to the results)...
11/ kind of ignores the most important part (that the features they used are based on an enormous body of human knowledge), and then tries to draw conclusions about what sorts of image tasks DL might not work on.
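
On the multiple hypothesis testing worry in 10/: when many non-deep models and feature sets are tried, the best performer's apparent significance is inflated unless you correct for it. A hedged sketch of one standard correction (Holm's method via statsmodels; the p-values here are placeholders, not results from the paper):

# Placeholder p-values, one per candidate non-deep model / feature set.
from statsmodels.stats.multitest import multipletests

pvals = [0.004, 0.020, 0.030, 0.045, 0.200, 0.500]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p={p:.3f}  Holm-adjusted p={pa:.3f}  reject H0: {r}")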
12/ they say "since the brain has a meaningful topology, translation invariance and narrow field of view representations may not be relevant".

Sure, that may be the case, but showing that DL works as well as other approaches doesn't seem to support this.
13/ my take: "even a weak form of DL appears to learn features equal to best human understanding of brain topology + function. This appears to be fertile ground for exploration, given DL properties like translation invariance don't a priori appear very relevant in neuroscience."
14/ sign-off: I have seen a lot of talk recently about Twitter pile-ons.

I don't think this is a bad paper or the results are irrelevant. I just disagree with the conclusions, which is pretty standard collegial science.

I hope, anyway. Let me know if I'm "part of the problem".