#tweeprint
Who tells the #hippocampus what and when to learn? Our latest article, together with @adriamilcar @IvanRaikov7 @rennocosta @annamurav @SolteszLab @PaulVerschure, is out in @TrendsCognSci. cell.com/trends/cogniti…
Link to open access:
authors.elsevier.com/c/1cype4sIRvHk…
We describe the entorhinal-hippocampal circuit (EHC) components enabling self-supervised learning. Cortical projections enter the hippocampus via the entorhinal cortex and loop over the DG and CA fields, functioning as a comparator that reconstructs its inputs (see Lörincz & Buzsáki 2000).
How can it match the reconstructed inputs to the raw ones? GABAergic neuron activation relates to the learning stage, and their projections run mostly counter-current to the perforant pathway, suggesting they are part of a circuit implementing backpropagation of the error within the EHC.
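To make the comparator idea concrete, here is a minimal toy sketch (not the paper's or the repository's code): EC rates are encoded into a conjunctive hippocampal code, reconstructed back at EC, and the reconstruction error is carried counter-current to update both projections. Layer sizes, nonlinearity, and learning rate are arbitrary assumptions.

```python
# Toy sketch of the comparator/reconstruction-error idea (illustrative only;
# sizes, nonlinearity and learning rate are arbitrary assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_ec, n_hpc = 64, 128                       # hypothetical EC and DG/CA sizes
W_in = rng.normal(0, 0.1, (n_hpc, n_ec))    # EC -> DG/CA (perforant-path-like)
W_out = rng.normal(0, 0.1, (n_ec, n_hpc))   # DG/CA -> EC (return projection)
lr = 0.01

ec = rng.random(n_ec)                       # stand-in EC rate vector
for _ in range(500):
    h = np.tanh(W_in @ ec)                  # conjunctive hippocampal code
    ec_hat = W_out @ h                      # reconstructed EC rates
    err = ec - ec_hat                       # comparator: raw vs. reconstructed
    g_out = np.outer(err, h)                # error carried counter-current
    g_in = np.outer((W_out.T @ err) * (1 - h ** 2), ec)
    W_out += lr * g_out
    W_in += lr * g_in
```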
What are the advantages? These ingredients allow the hippocampus to autonomously build and update conjunctive representations of the environment (episodes).
Thus, this simple rule might be sufficient to reproduce many of the physiological benchmarks found experimentally.
In an accompanying article, we implemented the simplest self-supervised model of entorhinal rate reconstruction and measured the cells' tuning to spatial locations and their responses to environmental modifications.
cell.com/iscience/fullt…
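As a rough illustration of how spatial tuning can be quantified (a standard occupancy-normalised rate map, not necessarily the analysis pipeline we used), one can bin a unit's activity by the simulated position:

```python
# Illustrative rate-map computation (names and arena size are assumptions).
import numpy as np

def rate_map(xy, activity, arena=(1.0, 1.0), bins=32):
    """Mean activity per spatial bin along a trajectory."""
    occ, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                               range=[(0, arena[0]), (0, arena[1])])
    act, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                               range=[(0, arena[0]), (0, arena[1])],
                               weights=activity)
    return np.divide(act, occ, out=np.zeros_like(act), where=occ > 0)
```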
We observed that this single reconstruction function was sufficient to trigger rate remapping, adaptation to environmental morphing, place-field expansion, and novelty detection, among other phenomena.
Surprisingly, when testing the model's reconstruction of entorhinal inputs during environmental modifications, we observed that the predicted grid cells' firing fields tended to expand (see Barry et al. 2012).
We then simulated an elongation of the arena (O’Keefe & Burgess 1996) as an interpolated expansion of the LEC rate maps and a periodic extension of the MEC rate maps. This elongation increased both the number and size of place fields across all layers.
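A hedged sketch of those two input transforms (function names and map shapes are illustrative; the actual code is in the repository linked below): LEC maps are stretched by interpolation, while MEC (grid-like) maps are extended periodically.

```python
# Illustrative arena-elongation transforms (assumed shape: bins_y x bins_x).
import numpy as np
from scipy.ndimage import zoom

def elongate_lec(rate_map, factor):
    """Interpolated expansion of an LEC rate map along the elongated axis."""
    return zoom(rate_map, (1.0, factor), order=1)

def elongate_mec(rate_map, factor):
    """Periodic extension of an MEC rate map: tile, then crop to new length."""
    new_len = int(round(rate_map.shape[1] * factor))
    reps = int(np.ceil(factor)) + 1
    return np.tile(rate_map, (1, reps))[:, :new_len]

lec_long = elongate_lec(np.random.rand(32, 32), 1.5)   # toy LEC map
mec_long = elongate_mec(np.random.rand(32, 32), 1.5)   # toy MEC map
```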
Finally, the error magnitude proved to be a good novelty signal for driving learning. Indeed, re-learning plateaus faster for familiar or fully novel environments than for moderately modified ones.
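One way to picture that gating, as a toy sketch (the linear gain is an assumption, not the model's exact rule): scale the learning rate by the reconstruction-error magnitude, so familiar inputs barely update the weights while surprising ones learn fast.

```python
# Sketch: reconstruction-error magnitude as a novelty signal gating learning.
import numpy as np

def novelty_gated_lr(err, base_lr=0.01, gain=1.0):
    """Learning rate grows with the reconstruction error (novelty)."""
    return base_lr * (1.0 + gain * np.linalg.norm(err))
```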
We have set up a GitLab repository sharing the model implementation and response analyses:
gitlab.com/diogo.santos.p…
Please try it out.