Brent Doiron
Apr 7, 2021
Today I am starting a short reading course focusing on some of my favorite, classic papers in theoretical neuroscience. I thought I would tweet my reviews. Up first: Dynamics of Encoding in a Population of Neurons by Bruce Knight (1972). rupress.org/jgp/article-pd…
Knight shows how variability in the responses across a population helps an (uncoupled) population of spiking neurons (with "forgetfulness") faithfully encode a dynamic stimulus.
He also shows the subtleties in linking the inter-spike intervals of a single neuron to the spike probability density (i.e., rate) over the population.
A true classic piece of work with some great perturbative analysis. Also, it is well written (like so many of the old papers).
Student takeaways: (1) membrane leak is bad because it can synchronize the spike times of a population of neurons to a common, dynamic stimulus. This compromises any population encoding of the stimulus.
(2): Response variability smears out the neuron responses so that the leaky population as a whole can now auto-encode the signal.
So two wrongs make a right. Leak (forgetfulness) + noise (unreliability) = good (auto) code. Biology finds a way (just like Jeff Goldblum told us in Jurassic Park).
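To make the takeaway concrete, here is a minimal sketch of Knight's setup (my own toy discretization with illustrative parameters, not the paper's): an uncoupled population of leaky integrate-and-fire neurons driven by a common sinusoidal stimulus, with and without response noise.

```python
import numpy as np

# Toy sketch of Knight's setup: N uncoupled leaky integrate-and-fire neurons
# driven by a common sinusoidal stimulus. All parameters are illustrative.
rng = np.random.default_rng(0)
N, T, dt = 1000, 2.0, 1e-4            # neurons, duration (s), time step (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0   # membrane time constant, threshold, reset

t = np.arange(0, T, dt)
stimulus = 1.2 + 0.3 * np.sin(2 * np.pi * 5.0 * t)  # dynamic 5 Hz input

def population_rate(noise_std):
    """Simulate the population; return the instantaneous population rate."""
    v = rng.uniform(v_reset, v_th, N)       # desynchronized initial conditions
    rate = np.empty_like(t)
    for i, s in enumerate(stimulus):
        noise = noise_std * np.sqrt(dt) * rng.standard_normal(N)
        v += (dt / tau) * (s - v) + noise   # leaky ("forgetful") integration
        spiked = v >= v_th
        v[spiked] = v_reset
        rate[i] = spiked.mean() / dt        # fraction spiking per unit time
    return rate

rate_quiet = population_rate(0.0)  # leak locks spike times: rate becomes spiky
rate_noisy = population_rate(0.2)  # noise smears responses: rate tracks stimulus
```

Comparing rate_quiet and rate_noisy shows both halves of the story: without noise the leak synchronizes spike times to the stimulus and the population rate is spiky, while a little noise desynchronizes the population so the rate faithfully tracks the stimulus.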
1/7 For my second paper I chose "Theoretical Reconstruction of Field Potentials and Dendrodendritic Synaptic Interactions in Olfactory Bulb" by Wilfrid Rall and Gordon Shepherd (Journal of Neurophysiology, 1968). citeseerx.ist.psu.edu/viewdoc/downlo…
2/7 In this gem Rall and Shepherd review their work outlined in several papers (in the 60s) where they proposed that in the olfactory bulb there are special dendrodendritic synaptic connections between mitral (excitatory) and granule (inhibitory) cells.
3/7 They justified this by constructing a compartmental model of a mitral cell and showing that accounting for the late phase of the extracellular field potential recorded in experiments requires a recurrent mitral cell/granule cell connection in the dendritic layer.
4/7 While such dendrodendritic connections were not known to exist at the time of the modelling, they were discovered by Reese and Brightman in a set of parallel electron micrographic studies (1965-66).
5/7 To put forth their prediction, Rall had to develop the compartmental modeling framework that is now the basis of many neuron simulation environments (like NEURON).
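For a feel of what a compartmental model is, here is a minimal sketch (illustrative parameters, not Rall's mitral-cell values): a passive dendrite discretized into a chain of electrically coupled compartments, with a current pulse injected at one end.

```python
import numpy as np

# Toy passive compartmental model in the spirit of Rall's framework: a chain
# of electrically coupled compartments. All parameters are illustrative.
n_comp, dt, T = 10, 1e-5, 0.05     # compartments, time step (s), duration (s)
tau_m, g_axial = 0.01, 50.0        # membrane time constant (s), coupling rate (1/s)

v = np.zeros(n_comp)               # deviation from rest in each compartment
steps = int(T / dt)
trace = np.zeros((steps, n_comp))

for step in range(steps):
    i_ext = np.zeros(n_comp)
    if step * dt < 0.01:           # 10 ms current pulse into compartment 0
        i_ext[0] = 2.0
    # axial currents from nearest neighbors (a discrete Laplacian)
    lap = np.zeros(n_comp)
    lap[1:] += v[:-1] - v[1:]
    lap[:-1] += v[1:] - v[:-1]
    v += dt * (-v / tau_m + g_axial * lap + i_ext / tau_m)
    trace[step] = v

# trace[:, -1] shows the attenuated, delayed response at the distal
# compartment: the kind of spread along the dendrite that Rall's
# compartmental reconstructions captured.
```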
6/7 Dendrodendritic mitral-granule cell synapses are now a classic example of lateral inhibition, where one mitral cell can inhibit another (and itself). Lateral inhibition is a common circuit motif used throughout the nervous system for neuron-neuron competition.
7/7 In their work Rall and Shepherd gave us a successful blueprint for how a theorist (Rall) and an experimentalist (Shepherd) can collaborate to tackle hard problems in neuroscience.
1/7 For my third paper I chose "Dynamics of Pattern Formation in Lateral-Inhibition Type Neural Fields" by Shun-ichi Amari; Biological Cybernetics, 1977 ini.rub.de/upload/file/15…
2/7 In this tour de force work Amari characterized several solutions of spatially extended neural field models, including self-sustaining localized activity (now called 'bumps'), oscillating dynamics (in two-layer E-I networks), and traveling wave solutions.
3/7 Amari identified the importance of inhibition in shaping these solutions - both in terms of the spatial and temporal scales of inhibitory interactions (as compared to excitatory).
4/7 At the time the study of spatio-temporal systems revolved around partial differential equations, where interactions are locally defined. Neural field equations are integro-differential equations, owing to the non-local interactions in the nervous system (i.e., axons).
5/7 Integro-differential equations are a different beast altogether. Amari developed clever ways to reduce the problem to the 1D dynamics of the bump width, or to construct simplified phase portraits to study oscillations (all requiring a Heaviside neural transfer function).
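Here is a minimal sketch of such a field (illustrative parameters, not Amari's): a 1D field with a Heaviside transfer function and a lateral-inhibition kernel, where a transient localized input leaves behind a self-sustaining bump.

```python
import numpy as np

# Toy 1D Amari-style neural field: Heaviside firing rate and a "Mexican hat"
# kernel (narrow excitation, broad inhibition). Parameters are illustrative.
nx, dx, dt = 201, 0.1, 0.01
x = (np.arange(nx) - nx // 2) * dx

def gauss(s):
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

w = 1.0 * gauss(0.5) - 0.5 * gauss(1.5)   # local excitation, broad inhibition
h = -0.3                                  # resting level, below threshold

u = np.full(nx, h)
for step in range(3000):
    drive = 1.0 * gauss(0.3) if step < 500 else 0.0  # transient localized input
    f = (u > 0).astype(float)                        # Heaviside transfer
    conv = dx * np.convolve(f, w, mode="same")       # non-local interactions
    u += dt * (-u + conv + h + drive)

bump_width = dx * (u > 0).sum()   # the 1D quantity Amari's reduction tracks
print(f"bump persists after input removal, width ~ {bump_width:.2f}")
```

The bump survives because recurrent excitation within the active region balances the leak and the surrounding inhibition, exactly the balance Amari's bump-width equation tracks.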
6/7 The study of many of these types of spatio-temporal dynamics is still in vogue today. This is especially true with imaging techniques providing us increasingly fine spatio-temporal resolution to probe the nervous system.
7/7 His parting sentence is still relevant today: "It seems important to unify statistical neurodynamics, field theory and self-organization theory."
1/7 For my fourth paper in my classics in theoretical neuroscience reading course I chose "The space of interactions in neural network models" by Elizabeth Gardner (1988). iopscience.iop.org/article/10.108…
2/7 In this mathematically impressive work Gardner explores the storage capacity of Hopfield networks. When the coupling rule obeys the symmetric Hebb rule proposed in the original work, it was known that the capacity is roughly p = 0.14N (p is the number of patterns; N the number of neurons).
3/7 Gardner uses techniques from spin-glass theory to explore other coupling solutions. This involves evaluating some heavy-duty integrals over the space of couplings, calculating the "volume" of the space that yields a solution as a function of p/N.
4/7 Gardner shows that for uncorrelated patterns with a vanishing robustness to "noise", one can achieve a capacity of p=2N, much higher than what one gets with symmetric Hebb (this is all in the large N limit).
5/7 Incidentally, Gardner's integrals are very complex and her derivation starts to become unwieldy, only to finally collapse (in this case to the number 2). I love these kinds of calculations - they seem like a magic trick with a big 'Ta-da' at the end.
6/7 While the majority of the paper is really an existence derivation, Gardner is pragmatic at the end and shows how a simple perceptron-style learning algorithm can actually find the solutions that are proven to exist.
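A toy version of that pragmatic ending might look like the sketch below (illustrative parameters; the update is a standard perceptron rule applied row-wise to the couplings, not Gardner's exact procedure).

```python
import numpy as np

# Toy perceptron-style search of coupling space: find J that stabilizes p
# random patterns with margin kappa. Parameters are illustrative; the
# capacity p -> 2N at kappa -> 0 holds only in the large-N limit.
rng = np.random.default_rng(1)
N, p, kappa = 100, 120, 0.0                   # p/N = 1.2, within capacity
patterns = rng.choice([-1, 1], size=(p, N))

J = np.zeros((N, N))                          # no self-coupling (diagonal kept 0)
for sweep in range(1000):
    stable = True
    for xi in patterns:
        h = (J @ xi) / np.sqrt(N)             # local field at each neuron
        bad = xi * h <= kappa                 # neurons with too-weak alignment
        if bad.any():
            stable = False
            J[bad] += np.outer(xi[bad], xi) / N   # perceptron update on bad rows
            np.fill_diagonal(J, 0.0)
    if stable:
        print(f"all {p} patterns stable after {sweep + 1} sweeps")
        break
```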
7/7 Gardner's work is an example of how statistical mechanics can offer deep insights into neuroscience (or at least neuro-adjacent) problems. While the calculations are hard, they are now an essential roadmap for those still pushing on important questions in neural networks.
1/7 For the fifth paper in my neurotheory classics course I chose "Analysis of Neuronal Excitability and Oscillation" by John Rinzel and Bard Ermentrout (1989; 1998). researchgate.net/profile/Bard-E…
2/7 In this book chapter Rinzel and Ermentrout review emerging techniques in dynamical systems (at the time) to describe various phenomena in cellular (mostly) and network neuroscience.
3/7 In the 80s the mathematical neuroscience community was quick to embrace (and sometimes build) ways to describe dynamics using qualitative, geometric analysis of solutions. This opened the door for those who could (or would) not handle older methods based on asymptotics.
4/7 Local dynamical systems theory (linearization) is now widely used throughout cellular and systems neuroscience. However, Rinzel and Ermentrout show how global, fully nonlinear dynamics can be understood using phase plane techniques.
5/7 They show how Hodgkin's excitability classes can be understood through how a rest state loses stability and gives way to limit cycle dynamics (repetitive spiking). In these cases the limit cycle is a global, nonlinear attractor and cannot be captured by a linear analysis.
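As a concrete stand-in for their examples, here is a minimal FitzHugh-Nagumo sketch (standard textbook parameters) in which increasing the drive I destabilizes the rest state and gives way to a limit cycle.

```python
import numpy as np

# Toy phase-plane example in the spirit of Rinzel & Ermentrout, using the
# FitzHugh-Nagumo model with standard textbook parameters.
def fhn_rhs(v, w, I, a=0.7, b=0.8, tau=12.5):
    dv = v - v**3 / 3 - w + I      # fast, voltage-like variable
    dw = (v + a - b * w) / tau     # slow recovery variable
    return dv, dw

def trajectory(I, v0=-1.0, w0=-0.5, dt=0.01, steps=20000):
    v, w = v0, w0
    vs = np.empty(steps)
    for k in range(steps):
        dv, dw = fhn_rhs(v, w, I)
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    return vs

rest = trajectory(I=0.0)      # settles to a stable fixed point (rest state)
spiking = trajectory(I=0.5)   # rest destabilized: limit cycle (repetitive firing)

# The geometry is explicit in the nullclines: w = v - v**3/3 + I (dv = 0) and
# w = (v + a)/b (dw = 0). Their intersection is the rest state, whose
# stability changes as I grows.
```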
6/7 Bursting dynamics (repetitive spiking separated by bouts of silence) is understood by separating fast (spiking) and slow (transitions between spiking and silence) sub-dynamics. That way, nonlinear analyses of lower-dimensional spaces are stitched together to capture the full dynamics.
7/7 Rinzel and Ermentrout's review outlines, in a very accessible way, analysis techniques that are still in use today. Further, many of these insights will see renewed use as the systems neuroscience community continues to analyse its data through a dynamical systems lens.
1/7 For my sixth paper in my neurotheory classics series I chose "A Framework for Mesencephalic Dopamine Systems Based on Predictive Hebbian Learning" by Montague, Dayan, and Sejnowski (1996). cs.cmu.edu/afs/cs/academi…
2/7 In this paper the authors apply ideas from temporal difference reinforcement learning to the newly recorded data from dopaminergic neurons in the ventral tegmental area (Schultz and Romo).
3/7 These data show how the neuronal response to reward can be associated with/shifted to the presentation of a cue stimulus that precedes (and thus predicts) eventual reward. The data are thus a complicated mix of associative memory, predictive coding, and reward-based learning.
4/7 Despite these complexities the authors explain the data with a very compact theory of learning based on a simple comparison of actual reward (at a fixed time) and the temporal difference between expected reward at that time and the immediate past time.
5/7 While the algorithm is exceedingly simple, its behavior is very rich. In particular, it shows how reward prediction errors on early trials can drive learning of an association between reward and a stimulus that reliably precedes reward (even with a long delay).
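Here is a minimal TD(0) sketch of that story (my own toy discretization with illustrative parameters, not the paper's exact model): early in learning the prediction error peaks at the reward, and after learning it has migrated back to the cue.

```python
import numpy as np

# Toy TD(0) model of the dopamine story: one trial is T time steps, a cue at
# t = 5 predicts a reward at t = 15. The stimulus representation (and hence
# any prediction) exists only from cue onset. Parameters are illustrative.
T, cue_t, reward_t = 20, 5, 15
alpha, gamma = 0.1, 1.0
V = np.zeros(T + 1)    # predicted future reward at each within-trial time

for trial in range(500):
    delta = np.zeros(T + 1)
    for t in range(1, T + 1):
        r = 1.0 if t == reward_t else 0.0
        v_prev = V[t - 1] if t - 1 >= cue_t else 0.0   # no prediction pre-cue
        v_now = V[t] if t >= cue_t else 0.0
        delta[t] = r + gamma * v_now - v_prev          # reward prediction error
        if t - 1 >= cue_t:
            V[t - 1] += alpha * delta[t]
    if trial in (0, 499):
        print(f"trial {trial}: peak delta at t = {delta.argmax()}")
# trial 0: delta peaks at the unpredicted reward (t = 15); trial 499: it has
# migrated back to cue onset (t = 5), as in the dopaminergic recordings.
```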
6/7 Reinforcement learning is now squarely at the interface between machine learning and neuroscience, and this paper is an early, and very important, foundational link in that chain.
7/7 But perhaps the true value of the model is the ease of generalization to new stimulus-reward associations in different experiments. Great theory should open many doors, far more than can be walked through in any one study.
1/7 For my seventh paper in my neurotheory classics course I reviewed "Model of Global Spontaneous Activity and Local Structured Activity During Delay Periods in the Cerebral Cortex" (1997) by Daniel Amit and Nicolas Brunel. webhome.phy.duke.edu/~nb170/pdfs/am…
2/7 In this study Amit and Brunel start to tackle the problem of how to build a mean field theory of recurrently coupled networks of spiking neuron models (integrate-and-fire type).
3/7 In particular they explicitly consider how the network contributes to both the mean and the fluctuations of neuronal input. With this they show how recurrent inhibition can stabilize an equilibrium state by lowering the mean input while increasing its fluctuations.
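A minimal sketch of that self-consistency loop (illustrative parameters, not the paper's; the transfer function is the standard LIF Siegert formula under a diffusion approximation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Toy mean-field self-consistency: the network rate nu sets the mean mu and
# variance sigma^2 of each neuron's input, and the stationary LIF transfer
# (Siegert formula) maps (mu, sigma) back to a rate. Parameters illustrative.
tau, t_ref = 0.020, 0.002            # membrane time constant, refractory (s)
v_th, v_reset = 20.0, 10.0           # threshold and reset (mV)
C_E, C_I = 400, 100                  # excitatory / inhibitory inputs per neuron
J_E, J_I = 0.1, 0.5                  # synaptic efficacies (mV)
mu_ext = 24.0                        # external drive (mV)

def lif_rate(mu, sigma):
    """Stationary LIF rate for Gaussian white-noise input (Siegert formula)."""
    lo, hi = (v_reset - mu) / sigma, (v_th - mu) / sigma
    integral, _ = quad(lambda u: np.exp(u**2) * (1 + erf(u)), lo, hi)
    return 1.0 / (t_ref + tau * np.sqrt(np.pi) * integral)

nu = 5.0                             # initial guess (spikes/s)
for _ in range(50):
    mu = mu_ext + tau * nu * (C_E * J_E - C_I * J_I)
    sigma = np.sqrt(tau * nu * (C_E * J_E**2 + C_I * J_I**2))
    nu = 0.5 * nu + 0.5 * lif_rate(mu, sigma)   # damped fixed-point iteration

print(f"self-consistent rate: {nu:.1f} spikes/s")
# Inhibition lowers mu (the C_I * J_I term) while still adding to sigma^2:
# the fluctuations keep the network firing even when the mean drive alone
# would sit below threshold.
```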
4/7 They further show that if the wiring has a clustered structure (similarly 'tuned' neurons couple strongly with one another) then attractor solutions are also stable - a subgroup of neurons can have a stable firing rate that is higher than the rest of the network.
5/7 At the time there was extensive work on how stochastically forced (uncoupled) spiking neurons transfer both signal and noise. Amit and Brunel opened this vein of research to network science, vastly expanding its reach.
6/7 Further, at the time there was an (incorrect) idea that classic 'mean field' techniques could not be used to study networks of spiking neurons. However, by including a treatment of dynamic fluctuations, Amit and Brunel gave us a path forward to just such a theory.
7/7 This paper is really just one example of this line of work, with Brunel and Hakim (Neural Comput., 1999) and Brunel (J. Comput. Neurosci., 2000) giving more insights. Mean field theory for networks of spiking neurons is now standard, and these papers are a foundational brick in this wall.