1/ Is scale all you need for AGI? (Unlikely.) But our new paper "Beyond neural scaling laws: beating power law scaling via data pruning" shows how to achieve a much better exponential decay of error with dataset size, rather than slow power-law neural scaling: arxiv.org/abs/2206.14486
2/ In joint work @MetaAI w/ Ben Sorscher, Robert Geirhos, Shashank Shekhar & @arimorcos, we show both in theory (via statistical mechanics) and in practice how to achieve exponential scaling by training only on selected data subsets of difficult, non-redundant examples (defined properly)
3/ Our statistical mechanics theory of data pruning makes several predictions, including the ability to beat power-law scaling, which we confirm in ResNets on various tasks (SVHN, CIFAR10, ImageNet) and in Vision Transformers fine-tuned on CIFAR10
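As a schematic of the contrast between the two regimes (the notation below is mine, not the paper's; the paper derives when the second regime is actually achievable):

```latex
% Random data collection: test error falls as a slow power law in dataset size N,
% with a small exponent nu, so each added decade of data buys only a modest gain.
E_{\mathrm{random}}(N) \;\sim\; N^{-\nu}, \qquad \nu > 0 \ \text{small}
% Pruning to hard, non-redundant examples: error can instead fall off
% exponentially in the size of the kept subset.
E_{\mathrm{pruned}}(N) \;\sim\; e^{-cN}, \qquad c > 0
```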
4/ Then, focusing on ImageNet, we performed a large-scale benchmarking study of 10 different data-pruning metrics that rank examples from easiest to hardest, and tested their efficacy in pruning data to create small subsets of only the hardest examples to train on
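To make the pruning step concrete, here is a minimal sketch of the generic "keep the hardest fraction" selection that any of the ranking metrics could feed into; the function name, the toy scores, and the keep fraction are illustrative, not the paper's benchmark code:

```python
import numpy as np

def prune_by_difficulty(difficulty_scores: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Return indices of the hardest `keep_fraction` of examples.

    `difficulty_scores` holds one score per training example, where larger
    means harder (e.g. from any metric that ranks examples easy-to-hard).
    """
    n_keep = int(np.ceil(keep_fraction * len(difficulty_scores)))
    # argsort ascending, then take the tail: the highest-scoring (hardest) examples
    return np.argsort(difficulty_scores)[-n_keep:]

# Toy usage: keep the hardest 75% of a 10-example dataset
scores = np.random.rand(10)
kept_indices = prune_by_difficulty(scores, keep_fraction=0.75)
```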
5/ We additionally developed a new unsupervised data-pruning metric that does not even require labels, is easy to compute given a pre-trained foundation model, and outperforms all previous metrics on ImageNet, allowing us to train on ~75% of ImageNet without accuracy loss
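The metric scores examples without labels using embeddings from a pre-trained model; below is a minimal sketch of one way to compute such a score, assuming k-means prototypes in the embedding space and scikit-learn, with random features standing in for real foundation-model embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans

def prototype_difficulty(embeddings: np.ndarray, n_clusters: int = 100) -> np.ndarray:
    """Label-free difficulty scores from a pre-trained model's embeddings.

    Cluster the embeddings with k-means and score each example by its
    distance to the nearest cluster centroid: examples far from every
    prototype are treated as harder and less redundant.
    """
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    # transform() gives distances to every centroid; take the minimum per example
    return kmeans.transform(embeddings).min(axis=1)

# Toy usage with random "embeddings" standing in for foundation-model features
features = np.random.randn(1000, 128)
scores = prototype_difficulty(features, n_clusters=10)
```

Scores like these can then be plugged into a keep-the-hardest selection such as the sketch above.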
6/ Overall, this work suggests that our current ML practice of collecting large amounts of random data is highly inefficient, leading to huge redundancy in the data, which we show mathematically is the origin of very slow, unsustainable power-law scaling of error with dataset size
7/ A better way forward might be the creation of foundation datasets: carefully curated small data subsets capable of training highly accurate models with far less data than our current large, randomly selected datasets require (see discussion in the paper)
8/ Indeed, the initial computational cost of creating a foundation dataset through data pruning can be amortized across efficiency gains in training many downstream models, just as the initial cost of training foundation models is amortized across faster fine-tuning on many tasks

More from @SuryaGanguli

Jul 16, 2021
1/ Our new work: "How many degrees of freedom do we need to train deep networks: a loss landscape perspective." arxiv.org/abs/2107.05802 We present a geometric theory that connects to lottery tickets and a new method: lottery subspaces. w/ @_BrettLarsen @caenopy @stanislavfort
2/ Many methods can train to low loss using very few degrees of freedom (DoF). But why? We show that to train to a small loss L using a small number of random DoF, the number of DoF + the Gaussian width of the loss sublevel set projected onto a sphere around initialization...
3/ must exceed the total number of parameters, leading to phase transitions in trainability and suggesting why pruning weights at init is harder than pruning later. We also provide methods to measure the high-dimensional geometry of loss landscapes through tomographic slicing...
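In symbols, the trainability condition stated in that thread reads roughly as follows (notation mine, not the paper's):

```latex
% d: number of random degrees of freedom (dimension of the training subspace)
% w(.): Gaussian width of the loss sublevel set S(L) = {theta : loss(theta) <= L},
%       projected onto the unit sphere around the initialization theta_0
% D: total number of parameters
d \;+\; w\!\left(\mathrm{proj}_{S^{D-1}(\theta_0)}\, S(L)\right) \;\gtrsim\; D
```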
Feb 4, 2021
1/ Super excited to share our work with @drfeifei and @silviocinguetta, led by the mastermind @agrimgupta92, on Deep Evolutionary Reinforcement Learning (DERL): arxiv.org/abs/2102.02202 which leverages large-scale simulations of evolution and learning to...
2/ generate diverse morphologies with embodied intelligence that can exploit the passive physical dynamics of agent-environment interactions to rapidly learn complex tasks in an energy-efficient manner
3/ We also obtain insights into the dynamics of morphological evolution - here is a lineage tree showing how our evolutionary dynamics can generate multiple diverse morphologies without sacrificing fitness
Mar 20, 2020
1/ New paper in @Nature : “Fundamental bounds on the fidelity of sensory cortical coding” with amazing colleagues: Oleg Rumyantsev, Jérôme Lecoq, Oscar Hernandez, Yanping Zhang, Joan Savall, Radosław Chrapkiewicz, Jane Li, Hongkui Zheng, Mark Schnitzer: nature.com/articles/s4158…
2/ See also here for a free version: rdcu.be/b26wp and tweeprint below ->
3/ We address an old puzzle: namely that when an animal has to discriminate between two visual stimuli, it often can’t do much better than the performance of an ideal observer that only has access to a small number of neurons in the relevant brain region processing those stimuli
Jul 18, 2019
1/ New in @sciencemagazine w/ @KarlDeisseroth lab: science.sciencemag.org/content/early/…: new opsin + multi-photon holography to image ~4000 cells in 3D volumes over 5 cortical layers while also stimulating ~50 neurons to directly drive visual percepts; data analysis and theory reveal…
2/ that visual cortex operates in a highly sensitive critically excitable regime in which stimulating a tiny subset of ~20 cells with similar orientation tuning is sufficient to both selectively recruit a large fraction of similarly responding cells and drive a specific percept
3/ theoretical analysis reveals this very low threshold, for the ignition of both large cell assemblies and perception, is almost as low as it can possibly be while still optimally avoiding false positive percepts driven by fluctuations in spontaneous activity
Nov 29, 2018
1/ Our new #neuroscience paper, "Emergent elasticity in the neural code for space", just appeared in @PNASNews: pnas.org/content/early/… Awesome work led by @SamOcko, with @kiahhardcastle and @lisa_giocomo. Take-home messages...
2/ We ask: how do we learn where we are? Two info sources are needed: 1) our recent history of velocity; 2) what landmarks we have encountered. How can neurons/synapses fuse these two sources to build a consistent spatial map as we explore a new place we have never seen before?
3/ We show that a simple attractor network, with velocity inputs that move an attractor bump and landmark inputs that pin it, can do this via Hebbian plasticity from the landmark inputs to the attractor network.
Oct 28, 2018
1/ New #deeplearning paper at the intersection of #AI #mathematics #psychology and #neuroscience: A mathematical theory of semantic development in deep neural networks: arxiv.org/abs/1810.10531 Thanks to awesome collaborators Andrew Saxe and Jay McClelland!
2/ We study how many phenomena in human semantic cognition arise in deep neural networks, and how these phenomena can be understood analytically in a simple deep linear network. Such phenomena include…
3/ The hierarchical differentiation of concepts over infant semantic development: children acquire broad categorical distinctions before they acquire fine categorical distinctions
