Blake Richards, 17 tweets, 7 min read
1/ SciTwitter: I'm very excited to share our new Perspective article out in Nature Neuroscience today!

nature.com/articles/s4159…
2/ In this piece, we argue that neuroscience would benefit from adopting a framework that parallels the approach to designing intelligent systems used in deep learning.
3/ ANN researchers do not attempt to design specific computations by hand. Instead, they design three core components: (1) architectures, (2) objective functions (cost/loss), and (3) learning rules, which together provide good inductive biases for learning how to do specific computations.
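(A minimal, hypothetical sketch of the three components from tweet 3, in plain NumPy; the toy task and all variable names are illustrative, not from the paper. No specific computation is hand-designed; the fit emerges from optimisation.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "task" is to match y = 2*x0 - 1*x1 (unknown to the learner)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])

# (1) Architecture: a single linear unit, y_hat = X @ w + b
w = rng.normal(size=2)
b = 0.0

# (2) Objective function: mean squared error between prediction and target
def loss(w, b):
    return np.mean((X @ w + b - y) ** 2)

# (3) Learning rule: gradient descent on the objective
lr = 0.1
for _ in range(200):
    err = X @ w + b - y
    w -= lr * (2.0 * X.T @ err / len(y))
    b -= lr * (2.0 * err.mean())
```

After optimisation, `w` approaches `[2, -1]` and `b` approaches 0: the specific computation was discovered by the learning rule, not designed in.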
4/ Why is this a good approach for ANNs? Because human-interpretable, hand-designed computations may not be the best solution for many complicated tasks. Instead, by optimising the networks after designing the three components above, non-intuitive solutions can be found.
5/ Similarly, we suggest that the brain's computations are a result of two optimisation processes, evolution and learning within the lifetime. And, like ANNs, we argue that there is no reason to believe that the solutions discovered by the brain are easy to describe/comprehend.
6/ Thus, we suggest neuroscience would benefit from trying to frame our investigations with respect to the architectures, objective functions, and learning rules that have shaped the brain.
7/ Importantly, this is not a claim that the brain is best approximated by a deep convnet, or something facile like that. Rather, it's a call to recognise that we may gain much more traction by using these three components to frame our theories and experiments.
8/ As an analogy: when we study the phylogeny of a species in ecology, we frame the data using mechanisms that explain what guided the species' evolution, e.g. natural selection, sexual selection, niche, etc. We don't try to explain the species' phenotype without these concepts.
9/ Similarly, when we try to understand a given brain, we should, whenever we can, frame the computations using the concepts of the architectures, objective functions, and learning rules that would have guided the emergence of those computations, both over evolution and within a lifetime.
10/ Another important point: we are not denying the existence of innate behaviour here. Indeed, a point about deep learning that is often missed is that it is not actually a "blank slate" approach.
11/ Only by crafting the right inductive biases for an ANN (via the three components), can we get good performance. Likewise, innate behaviours are intimately linked with these three components, even learning rules (since innate behaviours are still tunable).
12/ We believe that both theorists and experimentalists should start to view neural computation through these three components, using them as central conceptual tools for understanding how the computations that they study actually came to be.
13/ Finally, I just want to say that this work was the result of a workshop at the Bellairs Institute of @mcgillu organised in large part by some of the folks at @element_ai. Thanks to both of them! We couldn't have done this without their support.
14/ Thanks to all my co-authors (31 of them)! Special shout out to Denis Therien from @element_ai for being the major organiser, and to Tim Lillicrap and @KordingLab for really pushing forward on the writing with me.
15/ Also, shout out to @colleenjgillon for her wonderful figures (with contributions from @Pieters_Tweet, @somnirons and Andrew Saxe)!
16/ This was truly a joint paper, and all the co-authors contributed to the ideas and writing herein. Thanks everyone for your work on this!
Fin/ CCing the other co-authors I know are on Twitter and who are not tagged yet: @PhilBeaudoin @achristensen56 @SuryaGanguli @KepecsLab @NKriegeskorte @neurograce @kendmil @NeuroNaud @YiotaPoirazi @AnnaSchapiro @dyamins @hisspikeness