Our paper just came out! 🥳🙌🏽LFADS, a deep learning approach to understand neural population activity.🧠Many potential applications. Perhaps a primer is helpful?👨🏽‍🏫Please RT! @SussilloDavid @djoshea nature.com/articles/s4159… (1/n)
[also, @djoshea put together a great Matlab codepack and tutorials to help get people up and running, check it out! lfads.github.io 💾 And here’s a link to bypass paywall: rdcu.be/6Wji ]
High-level goal: estimate single-trial ‘firing rates’ from the observed spiking activity of a neural population. We begin with background / key assumptions:
First: activity in many brain areas is typically lower-dimensional than the number of observed neurons. (E.g., in M1, the activity of 200 neurons is usually well captured by 10-20 latent factors.) We call this abstract, low-D representation the ‘neural state.’
A simple schematic example (Cunningham & Yu, Nat Neuro 2014): with 3 neurons, we can plot activity in a 3-D state space. Each axis is a neuron’s firing rate. The 3-D neural activity is largely confined to a 2-D plane.
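The schematic above can be reproduced as a toy NumPy sketch (the latent dimensions, mixing weights, and noise level here are all hypothetical, purely for illustration): two latent factors drive three neurons, so a PCA of the 3-D activity finds nearly all the variance in a 2-D plane.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
# Two latent factors drive three neurons -> activity lies near a 2-D plane.
latents = rng.normal(size=(T, 2))
mixing = rng.normal(size=(2, 3))  # hypothetical loading matrix
activity = latents @ mixing + 0.05 * rng.normal(size=(T, 3))

# PCA via SVD: variance concentrates in the first two principal components.
X = activity - activity.mean(axis=0)
svals = np.linalg.svd(X, compute_uv=False)
var_explained = svals**2 / np.sum(svals**2)
```

Here `var_explained[:2]` sums to nearly 1 — the third dimension carries only the small observation noise, which is the sense in which the "neural state" is lower-dimensional than the neuron count.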
Second: neural states often evolve in time by following consistent rules, which we call ‘dynamics.’ In an autonomous, noiseless dynamical system, changes in state are a (potentially nonlinear) function of the current state.
Put another way: if you know the current state, you can largely predict how the state will evolve in time.
A now-classic example is Churchland*, Cunningham* et al., Nature 2012, who looked at neural state in M1 during well-prepared movements. They found that consistent dynamics governed the temporal evolution in neural state space for a wide variety of movement types.
Their findings suggest that if you know a system’s dynamics (ds/dt = f(s)), you should be able to describe how population activity evolves for any movement condition based solely on an initial state s(0).
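To make the ds/dt = f(s) idea concrete, here is a minimal sketch that Euler-integrates an autonomous dynamical system from an initial state s(0). The particular f used here (a rotation with weak cubic damping) is a made-up toy, not the dynamics from the paper:

```python
import numpy as np

def f(s):
    # Hypothetical nonlinear dynamics: rotational flow plus weak cubic damping.
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])        # rotational flow field
    return A @ s - 0.1 * s * (s @ s)   # damping keeps trajectories bounded

def rollout(s0, dt=0.01, steps=500):
    """Euler-integrate ds/dt = f(s) starting from the initial state s0."""
    s = np.array(s0, dtype=float)
    traj = [s.copy()]
    for _ in range(steps):
        s = s + dt * f(s)
        traj.append(s.copy())
    return np.stack(traj)

traj = rollout([1.0, 0.0])  # entire trajectory follows from s(0) alone
```

The point of the sketch: once f is fixed, the whole trajectory is determined by s(0) — which is exactly the property LFADS exploits on single trials.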
One of @SussilloDavid’s (many) key insights was that if we could accurately model f(s), we could potentially describe /any individual trial/ simply by finding the right initial condition s(0).
To model f(s), we use a recurrent neural network, which is essentially a nonlinear dynamical system. The RNN’s dynamics are set by adjusting its recurrent connections. It helps me to picture the RNN’s state vector, in which each element is the activity of an artificial unit.
Conceptually, you can then also think of ‘unrolling’ the RNN in time - its state evolution is completely determined by its dynamics (set by the recurrent connections, which are fixed after training). Here’s a simplified schematic:
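The unrolling picture can be sketched in a few lines of NumPy: a vanilla RNN with fixed recurrent weights W, where each state depends only on the previous state. (The unit count, weight scale, and tanh update here are illustrative choices, not the trained LFADS generator.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of artificial units (hypothetical size)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # fixed recurrent weights

def unroll(g0, T=100):
    """Unroll the autonomous RNN: g[t+1] = tanh(W @ g[t])."""
    g = np.array(g0, dtype=float)
    states = [g.copy()]
    for _ in range(T):
        g = np.tanh(W @ g)  # next state is a function of current state only
        states.append(g.copy())
    return np.stack(states)

states = unroll(rng.normal(size=N))
```

Because W is fixed after training, feeding in the same g(0) always reproduces the same state trajectory — the RNN's recurrent connections *are* its dynamics.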
The LFADS architecture (simplified) uses two RNNs. The ‘generator’ tries to replicate the neural population’s dynamics. The ‘encoder’ tries to take a given trial’s spiking activity and compress it into the proper initial state g(0).
Note the slight difference here between g(t) and s(t). g(t) (artificial) is the generic dynamical system we’ve trained to mimic s(t) (neural).
The neurons’ rates r(t) are taken as linear readouts from the underlying state s(t), followed by an exponential nonlinearity. The actual observed spikes n(t) are modeled as Poisson samples from the underlying rates.
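This observation model — linear readout, exponential nonlinearity, Poisson spiking — is simple to write down. A toy sketch (latent dimensions, readout weights, bias, and bin width are all hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, N = 200, 10, 30          # time steps, latent dims, neurons (hypothetical)
s = rng.normal(size=(T, D))    # stand-in for the latent states s(t)
W_out = rng.normal(scale=0.3, size=(D, N))  # linear readout weights
b = -1.0                       # bias setting the baseline log-rate

log_rates = s @ W_out + b
rates = np.exp(log_rates)          # exponential nonlinearity -> rates r(t) > 0
dt = 0.01                          # bin width in seconds (assumed)
spikes = rng.poisson(rates * dt)   # observed spike counts n(t)
```

The exponential guarantees positive rates, and the Poisson step is what lets LFADS score how well candidate rates explain the raw spike counts.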
Once the encoder spits out g(0), no more information is passed from encoder->generator. The generator attempts to reconstruct the observed data based solely on g(0). This forces it to model the neural population’s dynamics.
This structure is known as a sequential autoencoder, and it has many other applications in deep learning for data with temporal structure. Note this is all unsupervised - there’s no information about behavioral condition, etc. being fed into the model.
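Structurally, the encoder→generator pipeline looks like this minimal (untrained, random-weight) NumPy skeleton — a sketch of the information flow only, with all sizes and weight scales invented, and none of the variational machinery of the actual model:

```python
import numpy as np

rng = np.random.default_rng(2)
N_neurons, T, H = 30, 100, 40   # hypothetical sizes

# Random (untrained) weights -- a structural sketch, not a working model.
W_enc = rng.normal(scale=0.1, size=(H, H))
U_enc = rng.normal(scale=0.1, size=(H, N_neurons))
W_gen = rng.normal(scale=1.0 / np.sqrt(H), size=(H, H))
W_read = rng.normal(scale=0.1, size=(H, N_neurons))

def encode(spikes):
    """Encoder RNN: compress a trial's spikes into an initial state g(0)."""
    h = np.zeros(H)
    for t in range(spikes.shape[0]):
        h = np.tanh(W_enc @ h + U_enc @ spikes[t])
    return h  # g(0)

def generate(g0, T):
    """Generator RNN: unroll autonomously from g(0); no further encoder input."""
    g, rates = g0.copy(), []
    for _ in range(T):
        g = np.tanh(W_gen @ g)
        rates.append(np.exp(g @ W_read))  # linear readout + exp -> rates
    return np.stack(rates)

spikes = rng.poisson(0.5, size=(T, N_neurons))
rates = generate(encode(spikes), T)  # reconstruction depends only on g(0)
```

The key design choice is visible in `generate`: the spikes never reach the generator directly, so the only way to reconstruct them is for g(0) plus the learned dynamics to carry all the information.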
We think of these rates r(t) as ‘de-noised’ versions of the observed activity, where r(t) must be consistent with being generated by an underlying dynamical system.
OK, what does all this buy us? In the paper we go through several forms of validation to show this is a good idea.
We found that the single-trial, ‘de-noised’ rates were quite consistent across trials of a given condition (and are similar to the condition-averaged rates), despite LFADS having no knowledge of the conditions/trial identities.
Another key validation is that these rates were vastly more informative about behavior (arm movements, reaction times) on a millisecond timescale than previous methods.
Surprisingly, as seen above, LFADS worked well even with limited population sizes. With 25-50 neurons, we could decode behavior as well as we could with 200 neurons and previously standard methods.
Further validation was that the neural states inferred by LFADS were informative about held-out neurons that it had not been trained to model. This suggests we are inferring the state of a broader network than just the particular observed neurons.
We could also uncover previously-found dynamics (Churchland*, Cunningham*, et al.), but now clearly see that structure on individual trials (~2300 trials).
Importantly, we found we could hold out entire condition sets/movement angles when training LFADS, and the model could still accurately describe held-out movements just by finding the right initial states. This strongly argues that dynamics are consistent across conditions.
We’ll have follow-ups covering more advanced applications, like ‘neural stitching’ to link separately recorded neural populations (spanning months) by a single dynamical model, and inferring inputs to neural populations to account for non-autonomous dynamics.
This would’ve been impossible without mentors like @shenoystanford, Jaimie Henderson, @neuroleigh, & Larry Abbott, who encourage collaboration and free-flow of (crazy) ideas.
Shout out to the rest of the awesome team: @collinsljas Jozefowicz @sergeydoestweet Kao @EricMTrautmann @MattAntimatt Ryu. Also to two talented undergrads, @_yahiaali & @amykjli5, for their help with graphics/organization for this post.
(This is the first of a few posts. So follow us if you’re interested in hearing more, and also come see us at SfN to see the latest & greatest! 🧐) Fin.