2/ Our prior theory authors.elsevier.com/c/1f~Ze3BtfH1Z… quantitatively explains why few hexagonal grid cells were found in that work: many of the choices made were ones prior theory showed do not lead to hexagonal grids; when 2 well-understood choices are made, grids appear robustly ~100% of the time
3/ Also corrections: (1) difference-of-Gaussian place cells do lead to hexagonal grids; (2) so do multiple-bump place cells at a single scale; (3) hexagonal grids are robust to place cell scale; (4) Gaussian interactions can yield periodic patterns;
4/ (5) difference-of-Gaussian input patterns to place cells from grid cells are not biologically implausible; more details in the preprint tinyurl.com/ynpxtkpm and in the replies to specific parts of the thread copied below...
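For intuition on point (1), here is a minimal numpy sketch of a difference-of-Gaussian (center-surround) place-cell population of the kind the theory treats; the function name, arena size, and parameter values are illustrative assumptions, not the paper's code:

```python
import numpy as np

def dog_place_cell(pos, center, sigma=0.06, surround_ratio=2.0, surround_gain=0.5):
    """Difference-of-Gaussians (center-surround) place-cell tuning.

    pos, center: (..., 2) arrays of 2D locations in a unit-square arena.
    sigma: width of the excitatory center (illustrative value).
    surround_ratio, surround_gain: width and strength of the inhibitory surround.
    """
    d2 = np.sum((pos - center) ** 2, axis=-1)
    center_resp = np.exp(-d2 / (2 * sigma ** 2))
    surround_resp = surround_gain * np.exp(-d2 / (2 * (surround_ratio * sigma) ** 2))
    return center_resp - surround_resp

# Population of place cells tiling a unit-square arena
rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, size=(512, 2))          # one tuning center per place cell
xs = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                          np.linspace(0, 1, 64), indexing="ij"), axis=-1)
P = dog_place_cell(xs[..., None, :], centers)       # (64, 64, 512) responses over the arena
```

Roughly, the theory's claim is that when a downstream representation is trained on such center-surround place-cell codes under the two well-understood choices referred to above, hexagonal grid responses emerge robustly.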
1/ Is scale all you need for AGI? (Unlikely.) But our new paper "Beyond neural scaling laws: beating power law scaling via data pruning" shows how to achieve much superior exponential decay of error with dataset size, rather than slow power-law neural scaling: arxiv.org/abs/2206.14486
2/ In joint work @MetaAI w/ Ben Sorscher, Robert Geirhos, Shashank Shekhar & @arimorcos we show, both in theory (via statistical mechanics) and in practice, how to achieve exponential scaling by training only on selected data subsets of difficult, nonredundant examples (suitably defined)
3/ Our statistical mechanics theory of data pruning makes several predictions - including the ability to beat power-law scaling - which we confirm in ResNets on various tasks (SVHN, CIFAR10, ImageNet) and in Vision Transformers fine-tuned on CIFAR10
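As a toy illustration of this kind of pruning (not the paper's exact metric), one can score each example's difficulty by its distance to its class centroid in an embedding space and keep only the hardest fraction; the function and parameter names below are assumptions for illustration:

```python
import numpy as np

def prune_keep_hardest(embeddings, labels, keep_frac=0.5):
    """Keep the hardest examples, scored by distance to their class centroid.

    Illustrative difficulty proxy, not the paper's exact metric: examples far
    from their class centroid in embedding space are treated as hard/nonredundant
    and retained; the rest are pruned.
    """
    keep_idx = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = embeddings[idx].mean(axis=0)
        dist = np.linalg.norm(embeddings[idx] - centroid, axis=1)
        n_keep = max(1, int(keep_frac * len(idx)))
        keep_idx.append(idx[np.argsort(dist)[-n_keep:]])   # hardest = farthest from centroid
    return np.concatenate(keep_idx)

# Usage: embeddings from any pretrained encoder, labels as ints
# subset = prune_keep_hardest(embeddings, labels, keep_frac=0.3)
```

One prediction of the theory is that the best examples to keep depend on how much data you have: with abundant data, retaining hard examples works best, while with scarce data, easier examples are more valuable.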
1/ Our new work: "How many degrees of freedom do we need to train deep networks: a loss landscape perspective." arxiv.org/abs/2107.05802 We present a geometric theory that connects to lottery tickets and a new method: lottery subspaces. w/ @_BrettLarsen @caenopy @stanislavfort
2/ Many methods can train to low loss using very few degrees of freedom (DoF). But why? We show that to train to a small loss L using a small number of random DoF, the number of DoF + the Gaussian width of the loss sublevel set projected onto a sphere around initialization...
3/ must exceed the total number of parameters, leading to phase transitions in trainability and suggesting why pruning weights at init is harder than pruning later. We also provide methods to measure the high-dimensional geometry of loss landscapes through tomographic slicing...
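For concreteness, training with a small number of random degrees of freedom means fixing a random low-dimensional affine subspace through the initialization and optimizing only the coordinates within it; here is a minimal numerical sketch of that setup (the finite-difference gradient and all hyperparameters are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def train_in_random_subspace(loss_fn, theta0, d, steps=1000, lr=1e-2, seed=0):
    """Optimize a D-parameter loss using only d random degrees of freedom.

    Parameters are constrained to theta0 + A @ z, where A is a fixed random
    D x d projection and only z is trained (a toy finite-difference sketch
    of the random-subspace setup).
    """
    rng = np.random.default_rng(seed)
    D = theta0.size
    A = rng.standard_normal((D, d)) / np.sqrt(d)    # fixed random projection
    z = np.zeros(d)
    eps = 1e-4
    for _ in range(steps):
        base = loss_fn(theta0 + A @ z)
        grad = np.zeros(d)
        for i in range(d):                          # finite-difference gradient in the subspace
            dz = np.zeros(d)
            dz[i] = eps
            grad[i] = (loss_fn(theta0 + A @ (z + dz)) - base) / eps
        z -= lr * grad
    theta = theta0 + A @ z
    return theta, loss_fn(theta)
```

Whether such a run reaches a given loss sublevel set is what the phase transition governs: success requires the number of degrees of freedom d, plus the Gaussian width of the projected sublevel set, to exceed the total parameter count.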
1/ Super excited to share our work with @drfeifei and @silviocinguetta, led by the mastermind @agrimgupta92, on Deep Evolutionary Reinforcement Learning (DERL): arxiv.org/abs/2102.02202 which leverages large-scale simulations of evolution and learning to...
2/ generate diverse morphologies with embodied intelligence that can exploit the passive physical dynamics of agent-environment interactions to rapidly learn complex tasks in an energy-efficient manner
3/ We also obtain insights into the dynamics of morphological evolution - here is a lineage tree showing how our evolutionary dynamics can generate multiple diverse morphologies without sacrificing fitness
1/ New paper in @Nature: “Fundamental bounds on the fidelity of sensory cortical coding” with amazing colleagues: Oleg Rumyantsev, Jérôme Lecoq, Oscar Hernandez, Yanping Zhang, Joan Savall, Radosław Chrapkiewicz, Jane Li, Hongkui Zheng, Mark Schnitzer: nature.com/articles/s4158…
2/ See also here for a free version: rdcu.be/b26wp and tweeprint below ->
3/ We address an old puzzle: when an animal has to discriminate between two visual stimuli, it often can’t do much better than an ideal observer that has access to only a small number of neurons in the relevant brain region processing those stimuli
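For a sense of what such an ideal-observer benchmark means in practice, here is a minimal linear-decoder sketch under a Gaussian approximation (a textbook Fisher discriminant; the function name and regularization constant are assumptions, not the paper's analysis pipeline):

```python
import numpy as np
from math import erf, sqrt

def ideal_observer_accuracy(r_a, r_b):
    """Linear ideal-observer accuracy for discriminating two stimuli.

    r_a, r_b: (trials, neurons) response arrays for stimuli A and B.
    Uses a Fisher linear discriminant under a Gaussian approximation
    (an illustrative textbook decoder, not the paper's full analysis).
    """
    mu_a, mu_b = r_a.mean(axis=0), r_b.mean(axis=0)
    cov = 0.5 * (np.cov(r_a, rowvar=False) + np.cov(r_b, rowvar=False))
    cov += 1e-6 * np.eye(cov.shape[0])                 # small ridge for numerical stability
    w = np.linalg.solve(cov, mu_a - mu_b)              # optimal linear readout
    d_prime = (w @ (mu_a - mu_b)) / np.sqrt(w @ cov @ w)
    return 0.5 * (1 + erf(d_prime / (2 * sqrt(2))))    # accuracy at the optimal criterion

# e.g. the benchmark from a small subset of recorded neurons:
# acc = ideal_observer_accuracy(responses_A[:, :20], responses_B[:, :20])
```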
1/ New in @sciencemagazine w/ @KarlDeisseroth lab: science.sciencemag.org/content/early/…: new opsin + multi-photon holography to image ~4000 cells in 3D volumes over 5 cortical layers while also stimulating ~50 neurons to directly drive visual percepts; data analysis and theory reveal…
2/ that visual cortex operates in a highly sensitive critically excitable regime in which stimulating a tiny subset of ~20 cells with similar orientation tuning is sufficient to both selectively recruit a large fraction of similarly responding cells and drive a specific percept
3/ theoretical analysis reveals that this very low threshold for the ignition of both large cell assemblies and perception is almost as low as it can possibly be while still optimally avoiding false-positive percepts driven by fluctuations in spontaneous activity
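The tradeoff behind that last point can be illustrated with a toy signal-detection calculation (a Gaussian caricature, not the paper's model): the lower the ignition threshold, the more often spontaneous fluctuations alone will cross it.

```python
from math import erfc, sqrt

def false_positive_rate(threshold, sigma_spont=1.0):
    """Toy signal-detection view of the ignition threshold (not the paper's model).

    If spontaneous network activity fluctuates with standard deviation sigma_spont
    (Gaussian approximation), the chance that noise alone crosses the ignition
    threshold, producing a spurious percept, is the Gaussian tail probability.
    """
    return 0.5 * erfc(threshold / (sigma_spont * sqrt(2)))

# Lowering the threshold makes ignition by a few stimulated cells easier,
# but raises the rate of percepts triggered by spontaneous fluctuations:
for t in [1.0, 2.0, 3.0, 4.0]:
    print(t, false_positive_rate(t))
```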
2/ We ask: how do we learn where we are? Two information sources are needed: 1) our recent history of velocity; 2) the landmarks we have encountered. How can neurons/synapses fuse these two sources to build a consistent spatial map as we explore a new place we have never seen before?
3/ We show that a simple attractor network - with velocity inputs that move an attractor bump and landmark inputs that pin it - can do this, with Hebbian plasticity from the landmark inputs to the attractor network.
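A minimal toy version of this idea on a 1D ring (all connectivity shapes, the learning rate, and the normalization choices below are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Toy 1D ring attractor: recurrent dynamics keep a localized activity bump,
# velocity input shifts it, and Hebbian plasticity lets a landmark input
# learn to pin the bump at the location where the landmark is encountered.

N = 128                                        # neurons on a ring of positions
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Translation-invariant recurrent weights: local excitation, broad inhibition
diff = theta[:, None] - theta[None, :]
W = 2.0 * np.exp(np.cos(diff) - 1.0) - 1.0

r = np.exp(np.cos(theta) - 1.0)                # initial activity bump (max = 1)
W_land = np.zeros(N)                           # plastic weights from one landmark cell
eta, dt = 0.05, 0.1

def step(r, velocity, landmark_on, W_land):
    shift = int(round(velocity))               # toy velocity input: shift the recurrent drive
    drive = np.roll(W @ r, shift)
    if landmark_on:
        drive += W_land                        # learned landmark input pins the bump
    r_new = r + dt * (-r + np.maximum(drive, 0.0))
    r_new /= (r_new.max() + 1e-9)              # crude normalization keeps the bump bounded
    if landmark_on:                            # Hebbian update: landmark -> active ring cells
        W_land = W_land + eta * r_new
    return r_new, W_land

# Explore: run around a ring of 128 positions; the landmark sits at position 0,
# so it is re-encountered once per lap and gradually binds to the bump location there.
for t in range(512):
    r, W_land = step(r, velocity=1, landmark_on=(t % 128 == 0), W_land=W_land)
```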