1) Many thought that long-term memories and stable task performance require stable neuronal representations. Surprisingly, recent experiments showed that neural activity in several brain regions continuously changes even after animals have fully learned and stably perform their tasks. The underlying mechanisms and dynamics of this “representational drift” remain largely unknown.
2) We focused on drift in a neural population that learns to represent stimuli by optimizing a representational objective. We hypothesized that if this objective has degenerate optima, noisy synaptic updates during learning will drive the network to explore the region of synaptic weight space corresponding to (near-)optimal neural representations. In other words, the neural representation will drift within the space of optimal representations.
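To make the degenerate-optima intuition concrete, here is a minimal toy sketch in Python (illustrative only, not the actual model from the paper): noisy gradient descent on L(w) = (|w|^2 - 1)^2, whose minima form the entire unit circle. The radius stays pinned near 1 while the angle diffuses, i.e. the solution drifts along the manifold of optima.

    # Toy illustration (assumed, not the paper's model): noisy gradient descent
    # on an objective whose minima are degenerate (the whole unit circle).
    import numpy as np

    rng = np.random.default_rng(0)
    eta, sigma, steps = 0.05, 0.05, 20000
    w = np.array([1.0, 0.0])
    angles = []
    for _ in range(steps):
        grad = 4 * (w @ w - 1.0) * w                      # gradient of (|w|^2 - 1)^2
        w = w - eta * grad + sigma * np.sqrt(eta) * rng.standard_normal(2)
        angles.append(np.arctan2(w[1], w[0]))

    print("final radius:", np.linalg.norm(w))              # stays close to 1
    print("angle explored:", np.ptp(np.unwrap(angles)))    # grows over time: drift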
3) Hebbian/anti-Hebbian networks provide a concrete example of such networks and are ideal to study for several reasons: they are biologically plausible, they optimize similarity matching objectives, and they learn localized receptive fields (RFs) that tile the input data manifold, providing a minimal model for hippocampal place cells and neurons in posterior parietal cortex.
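For a feel of this model class, here is a rough sketch of an online Hebbian/anti-Hebbian similarity-matching update with noisy weights (a linear variant for brevity; the variable names, noise injection, and exact update rules are assumptions for illustration, and the networks that learn localized RFs typically involve additional constraints such as nonnegative activity, not included here):

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out = 10, 5
    eta, noise = 1e-2, 1e-3
    W = 0.1 * rng.standard_normal((n_out, n_in))    # feedforward weights (Hebbian)
    M = np.eye(n_out)                               # lateral weights (anti-Hebbian)

    def step(x, W, M):
        y = np.linalg.solve(M, W @ x)               # steady-state neural activity
        # Hebbian update of feedforward weights, anti-Hebbian update of lateral
        # weights, each corrupted by small Gaussian "synaptic" noise.
        W = W + eta * (np.outer(y, x) - W) + noise * rng.standard_normal(W.shape)
        M = M + eta * (np.outer(y, y) - M) + noise * rng.standard_normal(M.shape)
        return y, W, M

    for _ in range(5000):
        _, W, M = step(rng.standard_normal(n_in), W, M)   # stream of inputs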
4) We explored the long-term dynamics of learned receptive fields in these networks in the presence of synaptic noise. We found that the drifting receptive fields can be characterized as a coordinated random walk, with effective diffusion constants that depend on parameters such as learning rate, noise amplitude, and input statistics.
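As a rough illustration of how such an effective diffusion constant can be extracted (a sketch using a hypothetical rf_centers array, not the paper's analysis code): fit the mean squared displacement of RF centers against time lag.

    import numpy as np

    def effective_diffusion(rf_centers, dt=1.0, max_lag=100):
        """Estimate D from MSD(tau) ~ 2*D*tau for 1D RF-center trajectories.
        rf_centers: array of shape (timesteps, n_neurons)."""
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean((rf_centers[lag:] - rf_centers[:-lag]) ** 2)
                        for lag in lags])
        # least-squares slope through the origin, divided by 2 for 1D diffusion
        return np.sum(msd * lags * dt) / (2.0 * np.sum((lags * dt) ** 2))

    # sanity check on a synthetic random walk with step std 0.1 (true D = 0.005)
    rng = np.random.default_rng(2)
    walk = np.cumsum(0.1 * rng.standard_normal((5000, 20)), axis=0)
    print(effective_diffusion(walk))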
5) Despite the drift, the representational similarity of the population code is stable over time. Our model recapitulates recent experimental observations in hippocampus and posterior parietal cortex, and makes testable predictions that we checked against data and that can be probed further in future experiments.
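One way to see why drift can leave representational similarity intact (again a toy sketch with hypothetical response matrices, not the paper's analysis): if the population code merely rotates within the space of optimal representations, pairwise stimulus similarities are unchanged.

    import numpy as np

    def similarity_matrix(responses):
        return responses @ responses.T               # stimulus-by-stimulus similarity

    def rsa_stability(pop_early, pop_late):
        s1 = similarity_matrix(pop_early).ravel()
        s2 = similarity_matrix(pop_late).ravel()
        return np.corrcoef(s1, s2)[0, 1]             # near 1 => stable similarity

    rng = np.random.default_rng(3)
    resp = rng.standard_normal((50, 30))             # (n_stimuli, n_neurons)
    rot, _ = np.linalg.qr(rng.standard_normal((30, 30)))
    print(rsa_stability(resp, resp @ rot))           # rotated code: prints ~1.0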