1/ New tweeprint from my lab! This one is work done by the amazing @guerguiev, and was inspired by the work of @benlansdell and @KordingLab (who was also a collaborator in this project).

arxiv.org/abs/1910.01689
2/ Our focus in this paper is the question of weight alignment. If you work out how to efficiently estimate cost function gradients in biological networks, you find that it would be ideal for the feedforward and feedback pathways to have symmetric synaptic weights.
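To make that symmetry requirement concrete, here is a minimal numpy sketch of exact backprop in a toy two-layer linear network (the variable names like `W2` and `delta_out` are mine, for illustration): the hidden-layer error is carried by the *transpose* of the feedforward weights, which is exactly what a separate biological feedback pathway would have to match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network: h = W1 @ x, y = W2 @ h.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))

x = rng.standard_normal(3)
h = W1 @ x
y = W2 @ h
target = np.zeros(2)
delta_out = y - target           # output error for a squared-error cost

# Exact backprop carries the error back through the *transpose* of the
# feedforward weights. A biological feedback pathway computing
# delta_hidden would therefore need weights symmetric to W2.
delta_hidden = W2.T @ delta_out
grad_W1 = np.outer(delta_hidden, x)
```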
3/ But symmetric synaptic weights are not something we can reasonably assume in biological neural networks, since they would require a magical transmission of weight values between the two pathways. This is known as the "weight transport problem".
4/ The feedback alignment algorithm, wherein you use fixed random weights for the feedback pathways, seemed to solve this problem (see the sketch after this tweet):

nature.com/articles/ncomm…

But, in practice, feedback alignment does not scale up to difficult tasks:

papers.nips.cc/paper/8148-ass…
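A minimal sketch of the feedback-alignment substitution (toy dimensions and names are mine): the transpose from the previous sketch is replaced by a fixed random matrix B that is never trained.

```python
import numpy as np

rng = np.random.default_rng(0)

W2 = rng.standard_normal((2, 4))    # feedforward weights
B = rng.standard_normal((4, 2))     # fixed random feedback weights, shape of W2.T

delta_out = rng.standard_normal(2)  # some output error signal
delta_backprop = W2.T @ delta_out   # what exact backprop would send back
delta_fa = B @ delta_out            # what feedback alignment sends back

# B is never updated. During training, the feedforward weights drift into
# partial alignment with B, which is why this pseudo-gradient works at all;
# but, as noted above, that alignment proves too weak on hard tasks.
```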
5/ Thus, we probably still need to think about how learning can happen on feedback pathways to encourage weight symmetry. Interestingly, this is a *causal inference* problem, as you are trying to get the feedback pathway to reflect the causal structure of the feedforward pathway.
6/ We were inspired by a recent paper from @benlansdell and @KordingLab, which showed that you can use the discontinuity in spiking neurons to do causal inference:

biorxiv.org/content/10.110…
7/ This work uses something called "regression discontinuity design" (RDD) from econometrics. Essentially, this involves fitting a piecewise linear model on either side of a discontinuous threshold, and using the difference between the two intercepts at the threshold to estimate a causal effect.
8/ Inspired by this, we realized that the same trick could allow a neuron to infer its causal impact on its downstream connections, by fitting a piecewise linear model of its feedback EPSPs around its own spike threshold.
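Here is a minimal, self-contained sketch of the RDD estimator in this setting (the names, numbers, and simulated data are all mine, for illustration): on each trial a neuron's drive may cross its spike threshold; fit a line to the downstream signal on each side of the threshold within a narrow window, and read off the causal effect of a spike as the jump between the two intercepts.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0                      # spike threshold (arbitrary units)

# Simulated trials: u is the neuron's maximal drive on each trial; it
# spikes iff u crosses theta. The spike causally adds `effect` to the
# downstream signal y, on top of a confound that varies smoothly with u.
n = 5000
u = rng.uniform(0.0, 2.0, n)
spiked = (u >= theta).astype(float)
effect = 0.7
y = 0.5 * u + effect * spiked + 0.1 * rng.standard_normal(n)

# RDD: fit y ~ a + b*(u - theta) separately on each side of the
# threshold, within a narrow window, and compare intercepts at u = theta.
window = 0.3
below = (u >= theta - window) & (u < theta)
above = (u >= theta) & (u < theta + window)

def intercept_at_threshold(mask):
    X = np.column_stack([np.ones(mask.sum()), u[mask] - theta])
    coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return coef[0]               # fitted value exactly at the threshold

causal_estimate = intercept_at_threshold(above) - intercept_at_threshold(below)
print(causal_estimate)           # ~0.7: recovers the spike's causal effect
```

In the model, the outcome being fit would be the feedback EPSPs the neuron receives, so the estimated jump tells the neuron its causal impact on its downstream targets.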
9/ To explore this, we implemented a convnet, but trained its feedback pathways using a matched leaky integrate-and-fire (LIF) network. Basically, we alternated feedforward training in the convnet with feedback training in the LIF net, using the RDD algorithm for the feedback synapses.
10/ So, what happens? First, RDD does a much better job than feedback alignment of aligning the signs of the feedforward and feedback weights. It also does a better job than a recent algorithm from Akrout et al. (arxiv.org/abs/1904.05391) that uses correlations in activity to learn the feedback weights.
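For concreteness, sign alignment can be measured as the fraction of weight pairs with matching signs (a hypothetical metric implementation, not necessarily the paper's exact measure):

```python
import numpy as np

def sign_alignment(W, B):
    """Fraction of entries where the feedback weight has the same sign
    as the corresponding transposed feedforward weight."""
    return float(np.mean(np.sign(W.T) == np.sign(B)))

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 10))
B = rng.standard_normal((10, 20))     # fixed random feedback (feedback alignment)
print(sign_alignment(W, B))           # ~0.5, i.e. chance; RDD pushes this higher
```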
11/ In fact, RDD reduces the negative trace of the product of the feedforward and feedback matrices, a "self-alignment" term one can derive from a basic weight-alignment cost function. (See, for example, the great work of Daniel Kunin, @jbloom22 and others here: arxiv.org/abs/1901.08168)
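To see where that term comes from, expand a basic symmetric-alignment cost between the feedforward matrix W and feedback matrix B (a standard identity):

```latex
\frac{1}{2}\left\lVert W^{\top} - B \right\rVert_F^2
  = \frac{1}{2}\lVert W \rVert_F^2
  + \frac{1}{2}\lVert B \rVert_F^2
  - \operatorname{tr}(WB)
```

When only the feedback weights B are being trained, the first term is constant, so minimizing this cost amounts to decreasing -tr(WB), exactly the self-alignment term above; RDD decreases it without ever reading W directly.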
12/ As a result, by using RDD to train the feedback weights, we can do better than feedback alignment when learning datasets that are more challenging than MNIST.
13/ Thus, our work provides a proof of concept, showing that neurons could perform causal inference about their impact on downstream neurons using the discontinuity introduced by their spikes, and that this can be used to learn better feedback pathways for gradient propagation.
14/ Is this what the brain does? Who knows! This is a model; it's a potential hypothesis, and we put it out here for people to think about. But note one interesting prediction that emerges from it: reverse STDP at feedback synapses.
Fin/ More broadly, I think that there is a very interesting idea here that I want more people to consider: neurons may want to learn about their causal impact on other neurons in order to do credit assignment.