How useful is a connectome? We show that you can predict quite a bit about the neural activity of a circuit from measurements of its connectivity alone. Led by star graduate student @lappalainenjk, in collaboration with @jakhmack. #connectome biorxiv.org/cgi/content/sh… 1/n
How much does a connectome tell you about neural computations? This has been hotly debated (e.g. @SebastianSeung vs. @TonyMovshon, scientificamerican.com/article/c-eleg…).
Tricky, because you don’t know much about single-neuron + synapse dynamics, neuromodulation, ...! 2/n
Here we show that you can predict neural activity from connectivity measurements, provided you can guess what the network is supposed to do. With the weight matrix essentially given by the connectome, we use #taskoptimization to estimate single-neuron parameters. 3/n
We built a convolutional recurrent network of the fly visual system on a hexagonal grid, matching the columnar structure of the optic lobe. Weights (connections + filter weights) come from the connectome: a deep neural network that precisely maps onto a real brain circuit! 4/n
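To make 3/n–4/n concrete, here is a minimal PyTorch sketch of a connectome-constrained recurrent unit: the signed connectivity matrix is registered as a fixed buffer, and only per-cell-type neuron parameters (time constants, biases, synaptic gains) are trainable. All names (`ConnectomeDMN`, `log_tau`, the toy dynamics) are illustrative assumptions, not the actual model code.

```python
import torch
import torch.nn as nn

class ConnectomeDMN(nn.Module):
    """Toy connectome-constrained recurrent net: weights fixed, neuron params free."""
    def __init__(self, W_connectome, cell_type_of_neuron, n_cell_types):
        super().__init__()
        # Signed synaptic weight matrix from the connectome: a buffer, NOT a trainable parameter.
        self.register_buffer("W", W_connectome)                  # (N, N)
        self.register_buffer("cell_type", cell_type_of_neuron)   # (N,) type index per neuron
        # Free parameters, shared within each cell type.
        self.log_tau = nn.Parameter(torch.zeros(n_cell_types))   # membrane time constants
        self.bias = nn.Parameter(torch.zeros(n_cell_types))      # resting potentials
        self.gain = nn.Parameter(torch.ones(n_cell_types))       # synaptic scaling

    def forward(self, stimulus, dt=0.02):
        # stimulus: (T, N) external drive per neuron (e.g. photoreceptor input, zeros elsewhere)
        T, N = stimulus.shape
        v = stimulus.new_zeros(N)
        tau = self.log_tau.exp()[self.cell_type]
        bias = self.bias[self.cell_type]
        gain = self.gain[self.cell_type]
        rates = []
        for t in range(T):
            r = torch.relu(v)                                    # rectified "activity"
            dv = (-v + gain * (self.W @ r) + bias + stimulus[t]) / tau
            v = v + dt * dv                                      # Euler step of the voltage dynamics
            rates.append(r)
        return torch.stack(rates)                                # (T, N) activity traces
```

The point of this split: gradients flow into the neuron parameters through the simulated dynamics, but never into `W`, so the connectome stays fixed throughout training.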
Our connectome-constrained “deep mechanistic network” (DMN) has 64 identified cell types, 44K neurons, and over 1 million connections.
We trained its free parameters (e.g. single-cell + synapse dynamics) on a task we know flies do well: estimating motion from naturalistic inputs. 5/n
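A hedged sketch of the corresponding training loop, assuming a DMN like the one above plus a hypothetical `decoder` that reads out optic flow from the simulated activity, and a `naturalistic_clips` iterator yielding (stimulus, true_flow) pairs:

```python
import torch
import torch.nn as nn

def train_on_motion(model, decoder, naturalistic_clips, n_epochs=10, lr=1e-3):
    # Only the free neuron parameters and the decoder are optimized;
    # the connectome weight matrix is a buffer and receives no gradient updates.
    params = list(model.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(n_epochs):
        for stimulus, true_flow in naturalistic_clips:
            rates = model(stimulus)           # (T, N) simulated activity
            flow_hat = decoder(rates)         # predicted motion signal
            loss = loss_fn(flow_hat, true_flow)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```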
Each model unit corresponds to a real cell, allowing us to compare neural activity to experimental measurements across 24 studies.
The DMN accurately captures contrast selectivity for all cell types for which it has been reported, and makes predictions for all others. 6/n
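One way such a comparison could look in code (purely illustrative; `on_flash`/`off_flash` stand in for a simple ON/OFF flash stimulus battery): compute each cell type's mean response to ON vs. OFF flashes and compare the sign of the preference against what has been reported experimentally.

```python
import torch

def contrast_preference(model, on_flash, off_flash, cell_type, n_cell_types):
    """Hypothetical ON/OFF selectivity check: +1 = ON-preferring, -1 = OFF-preferring."""
    with torch.no_grad():
        r_on = model(on_flash).mean(dim=0)    # (N,) mean activity during an ON flash
        r_off = model(off_flash).mean(dim=0)  # (N,) mean activity during an OFF flash
    prefs = []
    for ct in range(n_cell_types):
        mask = cell_type == ct
        fsi = r_on[mask].mean() - r_off[mask].mean()   # flash-selectivity difference
        prefs.append(1 if fsi > 0 else -1)
    return prefs  # compare signs against experimentally reported contrast preferences
```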
Task-performing DMNs capture the cardinal direction tuning of the famous T4 (ON-motion) and T5 (OFF-motion) cells.
Again: the model was purely connectome-constrained and task-optimized, not fit to any neural recordings! 7/n
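A sketch of how direction tuning could be read out from a model like this (illustrative; `make_drifting_stimulus` is a hypothetical helper that renders a stimulus drifting in a given direction): estimate a cell population's preferred direction by a vector sum of its responses over drift directions, then compare e.g. the T4 subtypes against the four cardinal directions.

```python
import math
import torch

def preferred_direction(model, make_drifting_stimulus, neuron_ids, n_dirs=12):
    """Vector-sum estimate of preferred direction from responses to drifting stimuli."""
    angles = [2 * math.pi * k / n_dirs for k in range(n_dirs)]
    x, y = 0.0, 0.0
    for theta in angles:
        stim = make_drifting_stimulus(theta)                 # (T, N) drive for this direction
        with torch.no_grad():
            resp = model(stim)[:, neuron_ids].mean().item()  # mean response of e.g. T4 cells
        x += resp * math.cos(theta)
        y += resp * math.sin(theta)
    return math.degrees(math.atan2(y, x)) % 360              # preferred direction in degrees
```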
Of course, parameters are not uniquely constrained by connectome + task: training many DMNs yields hypotheses about different mechanisms. For example, DMNs trained on natural inputs fall into clusters, with different clusters corresponding to differently tuned cells (here: T4c). 8/n
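A minimal sketch of such an ensemble analysis, assuming each trained DMN has been reduced to a feature vector (e.g. its T4c responses to a fixed stimulus battery); k-means here is just one plausible choice of clustering algorithm, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_ensemble(tuning_curves, n_clusters=3):
    """tuning_curves: (n_models, n_features) array, one row per trained DMN.
    Returns a cluster label per model."""
    X = np.asarray(tuning_curves, dtype=float)
    # Normalize each model's tuning curve before clustering.
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return labels  # models in different clusters implement different candidate mechanisms
```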
We can trace differences in single-cell tuning back to differences in the tuning of their inputs: this shows how measuring the activity of any one neuron can constrain the tuning of several others (with our approach), and rule out entire hypotheses/model classes! 9/n
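An illustrative sketch of that logic: given an ensemble of DMNs and a measured tuning curve for one recorded neuron, keep only the models whose prediction for that neuron is consistent with the recording; their (now much tighter) predictions for other, unmeasured neurons are the surviving hypotheses. The correlation metric and threshold are placeholders.

```python
import numpy as np

def filter_models_by_measurement(predicted, measured, threshold=0.8):
    """predicted: (n_models, n_stimuli) model predictions for ONE recorded neuron;
    measured: (n_stimuli,) its experimentally measured tuning curve.
    Keep models whose prediction correlates well with the measurement."""
    keep = []
    for i, pred in enumerate(np.asarray(predicted, dtype=float)):
        r = np.corrcoef(pred, np.asarray(measured, dtype=float))[0, 1]
        if r >= threshold:
            keep.append(i)
    return keep  # surviving models constrain the tuning of other, unmeasured cells
```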
This trick relies on models being aligned by connectome constraints. It works best when connectivity is sparse (almost always true in biological neural networks), or when one has some information about the strength of connections. In our case, it is a combination of both effects. 10/n
We provide a discovery tool for generating and testing hypotheses about neural computations with connectomes!
Details + mega supplement + lots more:
biorxiv.org/cgi/content/sh…
@lappalainenjk, @jakhmack, F. Tschopp, @MasonBMcGill, @EGruntman, @sridhama, A. Nern, S. Takemura, K. Shinomiya
Come and hear about this work at 4pm today at the Connectomics workshop at @CosyneMeeting!