#tweeprint
Universality and individuality in neural dynamics across large populations of recurrent networks
arxiv.org/abs/1907.08549.

With fantastic collaborators @niru_m, @ItsNeuronal, @MattGolub_Neuro, @SuryaGanguli.
Many recent studies find striking similarities between representations in biological brains 🧠 and artificial neural networks 🤖 trained to solve analogous tasks.
This is pretty remarkable when you think about it, because brains and ANNs have serious differences in their biophysical and architectural details.
This raises a fundamental question: how should we interpret the often striking representational similarity of biological and artificial networks? 👩‍🔬👨‍🔬
While the technology is not (yet!) there to address this question with biological brains, we can begin to address it theoretically by comparing various ANN architectures’ representations and dynamics against each other. ⚖️
E.g. when you train recurrent networks with different architectures on the same task, would you expect the solutions to look different or the same? In what ways?
We trained thousands of RNNs to solve simple tasks, to see whether these comparisons (and the scientific conclusions drawn from them) are sensitive to particular modeling choices, e.g. LSTM vs. GRU vs. vanilla RNN.
We found evidence for both individuality 💅 and universality 🌌 in the solutions across different RNN architectures, with the geometry of representations tending to be more varied and the dynamics tending to be more universal.
Here’s an example from the preprint: a simple task to produce a sine wave whose frequency is proportional to a given input (command) frequency. We trained recurrent networks to solve this task.
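To make the task concrete, here is a minimal sketch of a data generator for it. The function name, trial count, time step, and frequency range are hypothetical illustration choices, not the preprint’s settings.

```python
import numpy as np

def make_sine_task(n_trials=64, n_steps=200, dt=0.01,
                   freq_range=(1.0, 3.0), seed=0):
    """Generate (input, target) batches for the sine-wave task.

    Each trial's input is a constant command value; the target is a
    sine wave whose frequency is proportional to that command.
    """
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(*freq_range, size=n_trials)           # Hz
    t = np.arange(n_steps) * dt                               # seconds
    inputs = np.repeat(freqs[:, None], n_steps, axis=1)       # (trials, steps)
    targets = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
    return inputs[..., None], targets[..., None]              # add feature dim

x, y = make_sine_task()
print(x.shape, y.shape)  # (64, 200, 1) (64, 200, 1)
```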
Example state-space trajectories show some differences across architectures.
We compared the geometry of network representations using canonical correlation analysis (CCA). Networks cluster by architecture (each dot is a network), with intercluster distances that are large relative to intracluster distances, suggesting the representational geometry is sensitive to these modeling choices.
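For reference, the core of a CCA comparison fits in a few lines of numpy: the mean canonical correlation between two activity matrices, computed via principal angles between their column spaces. This is a bare-bones sketch; the preprint’s analysis may differ in preprocessing and dimensionality-reduction details.

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean canonical correlation between two activity matrices.

    X, Y: (samples, units) hidden states from two trained networks,
    flattened over trials and time steps.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx.T @ Qy are the canonical correlations
    # (cosines of the principal angles between the two subspaces).
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return rho.mean()

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10))
B = A @ rng.standard_normal((10, 10))   # invertible linear transform of A
print(round(cca_similarity(A, B), 3))   # 1.0: CCA ignores linear transforms
```

The invariance to invertible linear transforms is exactly why CCA is a natural tool here: it compares representations up to a change of basis, rather than unit-by-unit.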
Next, we asked if we would reach the same conclusion if we compared the underlying dynamics 🌀. To do this, we used tools from dynamical systems theory (fixed points and linearization) to extract a simple dynamical portrait for each network.
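The fixed-point step works by minimizing the speed q(h) = ½‖F(h) − h‖² of the state-update map F, starting from many candidate states. Below is a hypothetical numpy sketch for a vanilla tanh RNN with an analytic gradient; in practice the same idea is applied with autodiff, which also handles LSTMs and GRUs.

```python
import numpy as np

def fixed_points_tanh_rnn(W, b, h_inits, lr=0.1, n_iters=5000):
    """Approximate fixed points of h -> tanh(W @ h + b).

    Runs gradient descent on q(h) = 0.5 * ||tanh(W h + b) - h||^2
    from a batch of candidate states (rows of h_inits). Returns the
    optimized states and their final speeds q.
    """
    H = h_inits.copy()                          # (candidates, units)
    for _ in range(n_iters):
        F = np.tanh(H @ W.T + b)
        r = F - H                               # residual F(h) - h
        # grad q = (J - I)^T r, with Jacobian J = diag(1 - F^2) W
        H -= lr * (((1 - F**2) * r) @ W - r)
    F = np.tanh(H @ W.T + b)
    q = 0.5 * np.sum((F - H) ** 2, axis=1)
    return H, q

rng = np.random.default_rng(0)
n = 8
W, b = 0.5 * np.eye(n), np.zeros(n)             # contracting toy dynamics
H, q = fixed_points_tanh_rnn(W, b, rng.standard_normal((4, n)))
print(q.max() < 1e-8)                           # True: all converged to h* = 0
```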
When we then compared network distances using fixed-point topology, we found no real difference across architectures (panel e), suggesting the topology may be universal.
Linearizing the dynamics reveals a common motif with small differences across architectures: the common motif is a nearly linear solution that produces the oscillations, while the small degree of nonlinearity each network uses to generate them varies.
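To illustrate what a linearized motif looks like: for a vanilla RNN update h → tanh(W h + b), the Jacobian at a fixed point h* is diag(1 − tanh²(W h* + b)) · W, and its eigenvalues summarize the local dynamics. A complex pair with magnitude near 1 supports a nearly linear oscillation, as in the sine-wave task. A toy sketch (the rotation matrix is a constructed example, not a trained network):

```python
import numpy as np

def rnn_jacobian(W, b, h_star):
    """Jacobian of h -> tanh(W @ h + b) evaluated at h_star."""
    F = np.tanh(W @ h_star + b)
    return (1 - F**2)[:, None] * W              # diag(1 - tanh^2) @ W

# A 2x2 rotation: at the fixed point h* = 0 the Jacobian is W itself,
# with eigenvalues exp(+/- i*theta), i.e. a marginally stable oscillation.
theta = 0.1
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eig = np.linalg.eigvals(rnn_jacobian(W, np.zeros(2), np.zeros(2)))
print(np.round(np.abs(eig), 3))                 # [1. 1.]
```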
We hope this kind of in silico 🖥️ study helps advance a discussion about the use of ANNs in neuroscience. There are more examples in the preprint arxiv.org/abs/1907.08549.
Thread by David Sussillo.