Laura Driscoll
neuro/ai postdoc https://t.co/KzZoERsv10 #blacklivesmatter (she/they)

Aug 16, 2022, 15 tweets

🚨⏰ 🚨 #TWEEPRINT TIME 🚨⏰ 🚨

💫🎊🥳 My postdoc work is now online! 🎉🌍💫

@shenoystanford @SussilloDavid and I have been working to understand how neural networks perform multiple related/interfering computations using the computation-through-dynamics framework.
1/15

We identified a neural substrate for compositional computation by reverse engineering multitasking artificial recurrent neural networks (RNNs). We call these building blocks dynamical motifs; they can be composed in different ways to implement different tasks.
2/15
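To make the setup concrete, here is a minimal sketch of this kind of multitask RNN, where a static one-hot rule input configures which task the network performs. All names and sizes (`rnn_step`, `n_tasks`, etc.) are illustrative choices of mine, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the paper's settings.
n_units, n_stim, n_tasks = 64, 4, 3

# Recurrent, stimulus-input, and rule-input weights (untrained here).
W = rng.normal(0.0, 1.0 / np.sqrt(n_units), (n_units, n_units))
B_stim = rng.normal(0.0, 0.1, (n_units, n_stim))
B_rule = rng.normal(0.0, 0.1, (n_units, n_tasks))
b = np.zeros(n_units)

def rnn_step(x, stim, rule, dt=0.1):
    """One Euler step of a vanilla continuous-time RNN. The static
    one-hot `rule` input configures the task and stays constant
    for a whole trial."""
    dx = -x + np.tanh(W @ x + B_stim @ stim + B_rule @ rule + b)
    return x + dt * dx

# Run one trial of "task 1" with no stimulus drive.
rule = np.eye(n_tasks)[1]
x = np.zeros(n_units)
for _ in range(200):
    x = rnn_step(x, np.zeros(n_stim), rule)
```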

To identify shared motifs, we interpolated across the static inputs that configured the network to perform different tasks & tracked fixed points at each interpolated input setting. This amounts to an empirical bifurcation diagram of a high-dimensional dynamical system.
3/15
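A minimal sketch of that procedure on the toy network above: hold the stimulus at zero, interpolate the rule input between two tasks, and find a fixed point at each setting by numerically minimizing the speed of the dynamics. This is in the spirit of standard fixed-point finding for RNNs; the optimizer and warm-start scheme here are my choices, not necessarily the paper's.

```python
from scipy.optimize import minimize

def find_fixed_point(rule, x0):
    """Minimize the squared speed of the dynamics at a fixed rule
    input (stimulus held at zero); a minimum near zero is a fixed
    point or a very slow point."""
    def speed_sq(x):
        dx = -x + np.tanh(W @ x + B_rule @ rule + b)
        return 0.5 * float(dx @ dx)
    return minimize(speed_sq, x0, method="L-BFGS-B").x

# Interpolate the rule input between two tasks and track the fixed
# point, warm-starting each search from the previous solution.
rule_a, rule_b = np.eye(n_tasks)[0], np.eye(n_tasks)[1]
x_star = np.zeros(n_units)
branch = []
for alpha in np.linspace(0.0, 1.0, 11):
    rule = (1.0 - alpha) * rule_a + alpha * rule_b
    x_star = find_fixed_point(rule, x_star)
    branch.append(x_star)  # the sequence of solutions traces one
                           # branch of the empirical bifurcation diagram
```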

Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor.
4/15

Fixed points often moved their positions in state space, leading to qualitatively different behavior of the dynamical system in different contexts. In this example, an unstable fixed point shifted in state space, causing the state to move toward a different stable fixed point.
5/15
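Labeling a fixed point as stable or unstable is a standard dynamical-systems computation: linearize the dynamics there and check the eigenvalues of the Jacobian. Continuing the toy network above (`x_star` and `rule` come from the fixed-point search sketch):

```python
def jacobian_at(x_star, rule):
    """Jacobian of dx/dt = -x + tanh(W x + B_rule rule + b) at x_star:
    -I + diag(1 - tanh(z)^2) @ W."""
    z = W @ x_star + B_rule @ rule + b
    return -np.eye(n_units) + np.diag(1.0 - np.tanh(z) ** 2) @ W

# Any eigenvalue with positive real part is a growing mode,
# i.e. the fixed point is unstable along that direction.
eigvals = np.linalg.eigvals(jacobian_at(x_star, rule))
is_unstable = bool(np.any(eigvals.real > 0))
```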

Dynamical motifs were reused across contexts in every network design and hyperparameter setting that we studied. Subpopulations of units collectively employed dynamical motifs that were shared when different tasks required similar computations.
6/15

Lesioning subpopulations of units resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs.
7/15
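A sketch of what such a lesion looks like in the toy network: zero the recurrent weights into and out of a chosen subpopulation, then re-score each task. The index set `motif_units` is a placeholder of mine; identifying which units carry a motif is the hard part, and comes from the analysis above.

```python
def lesion_units(W, units):
    """Silence a subpopulation by zeroing its incoming and outgoing
    recurrent weights (one common convention; another is to clamp
    the units' activity to zero)."""
    W_les = W.copy()
    W_les[units, :] = 0.0   # no recurrent input to lesioned units
    W_les[:, units] = 0.0   # no recurrent output from lesioned units
    return W_les

# Placeholder subpopulation implicated in one motif.
motif_units = np.arange(8)
W_lesioned = lesion_units(W, motif_units)
# Re-running every task with W_lesioned and scoring performance per
# task tests modularity: a lesion that destroys one motif should
# spare tasks built from the other motifs.
```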

Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This is very cool in the context of critical periods!!
8/15
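One illustrative way to probe this kind of fast transfer in the toy network (my assumption, not necessarily the paper's training protocol): freeze the recurrent weights that carry the learned motifs and fit only a new static rule input for the novel task, so very few parameters need to change.

```python
def rnn_step_frozen(x, stim, b_rule_new, dt=0.1):
    """Same dynamics as before, but W, B_stim, b are frozen; only the
    static rule-input current `b_rule_new` is trainable (it enters
    directly, standing in for a new column of B_rule)."""
    dx = -x + np.tanh(W @ x + B_stim @ stim + b_rule_new + b)
    return x + dt * dx

def novel_task_loss(b_rule_new, stim_seq, target):
    """Squared error of the final state against a target pattern.
    Toy objective; a real task would score outputs over time."""
    x = np.zeros(n_units)
    for stim in stim_seq:
        x = rnn_step_frozen(x, stim, b_rule_new)
    return float(np.sum((x - target) ** 2))

# Toy "novel task": placeholder stimulus sequence and target state.
stim_seq = [np.zeros(n_stim)] * 50
target = rng.normal(0.0, 0.5, n_units)
res = minimize(lambda v: novel_task_loss(v, stim_seq, target),
               np.zeros(n_units), method="L-BFGS-B")
b_rule_new = res.x  # the learned task-configuring input
```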

Cognitive scientists have hypothesized a compositional neural code, in which complex neural computations are built from constituent parts, like words in a sentence. tinyurl.com/computationale…
9/15

This work establishes dynamical motifs as a fundamental unit of compositional computation in RNNs, intermediate between the neuron and the network.
10/15

As more whole-brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs may guide questions about specialization and generalization across brain regions. tinyurl.com/neuropixels2
11/15

This work developed my intuition about fast learning, critical periods, & modular lesion effects. More new questions than answers! Thrilled to continue this line of work with @leaduncker in our joint theory lab @therealLDlabs [now starting to look for a home institution 👀]
12/15

Thanks to my advisors @shenoystanford @SussilloDavid and the lab for creating such a great environment to pursue this work. Thanks to @GuangyuRobert for laying the foundation for this project and the wider community @CosyneMeeting for providing feedback over the years!
13/15

To learn more about dynamical systems, read tinyurl.com/nonlinear-dyna… and follow @stevenstrogatz @eigensteve
14/15

Check out other neuroscience work using the dynamical systems approach:
tinyurl.com/dunckerreview @leaduncker
tinyurl.com/vyasreview @SaurabhsNeurons @MattGolub_Neuro
@KanakaRajanPhD @jmourabarbosa @ostojic_srdjan F. Mastrogiuseppe, O. Barak, Shea-Brown & many more!
15/15
