dr. jack morris
Jun 3, 2025 · 10 tweets
new paper from our work at Meta!

**GPT-style language models memorize 3.6 bits per param**

we compute capacity by measuring total bits memorized, using some theory from Shannon (1953)

shockingly, the memorization-datasize curves look like this:
        ___________
      /
    /

(🧵)
this all started from a quest to come up with a proper measurement of model memorization

it's hard to compute *per-example* memorization, because models "share" info between datapoints

so we start with random uniform strings, where sharing isn't possible, and measure memorization there directly
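to make the measurement concrete, here's a minimal sketch of the idea (not the paper's exact code; it assumes an HF-style causal LM whose forward pass returns .logits). each token of a uniform random string carries log2(V) bits, so any reduction in the model's code length below that baseline must be memorization:

```python
# minimal sketch of the measurement idea (not the paper's exact code).
# assumes a causal LM in the Hugging Face style: model(ids).logits.
import math
import torch

def memorized_bits(model, sequences, vocab_size):
    """sequences: LongTensor (num_seqs, seq_len), tokens drawn uniformly
    from [0, vocab_size). returns total bits memorized across the set."""
    baseline = math.log2(vocab_size)  # bits/token under the uniform prior
    total = 0.0
    with torch.no_grad():
        for seq in sequences:
            logits = model(seq[:-1].unsqueeze(0)).logits      # (1, T-1, V)
            logp = torch.log_softmax(logits, dim=-1)
            picked = logp[0, torch.arange(seq.numel() - 1), seq[1:]]
            nll_bits = -picked.sum().item() / math.log(2)     # nats -> bits
            total += (seq.numel() - 1) * baseline - nll_bits
    return total
```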
we then compute the capacity of different models
(GPT models with varying numbers of layers and hidden dimensions)

averaged over hundreds of models in fp32, we get a clean linear trend of around 3.6 bits-per-parameter, regardless of the exact details
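reading the slope off that trend is just a linear fit. a toy sketch with made-up numbers (the real fit averages hundreds of trained models):

```python
# toy sketch with made-up numbers: capacity grows linearly with params,
# and the slope is the bits-per-parameter figure (the paper finds ~3.6).
import numpy as np

params = np.array([1e5, 5e5, 1e6, 5e6, 2e7])              # hypothetical sizes
capacity = np.array([3.7e5, 1.8e6, 3.5e6, 1.8e7, 7.2e7])  # hypothetical bits

slope, _ = np.polyfit(params, capacity, 1)
print(f"bits per parameter ~ {slope:.2f}")
```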
we train all of our models until they "saturate", which usually happens around 1M steps with a very large batch size

models memorize the same amount, regardless of training datasize

meaning they have fixed capacity and instead "spread it thinner" when trained on more examples
this gives a pretty good explanation of how models learn

in particular, it explains grokking

grokking occurs *exactly* when capacity saturates. this is where models can't perfectly fit every training example, so they have to share info between examples in a smart way
we also compute capacity in bf16 and it drops a bit, to 3.5ish.

but that's a relative increase in bitwise usage: 3.6 of 32 bits in fp32 is ~11%, while 3.5 of 16 bits in bf16 is ~22%

(my first thought was that transformers are doing a bad job of using params efficiently, but now i'm not sure. it's not *that* bad)
when we train on text data, the curves look different

models memorize examples to the extent that they can fit them in their parameters

beyond this point, the models discard per-example memorization in favor of shared info (*generalization*)

the memorization curves actually slope downward past this point
running these experiments in a clean setting with perfectly deduplicated text tells us a lot about privacy:

- once capacity is sufficiently saturated, the **test examples** are slightly more extractable than the training examples -- maybe extraction is a bit of a myth?
- the most extractable examples are the ones with really rare tokens, typically data from other languages that slipped into the training set
- membership inference is much easier than extraction
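for reference, the simplest membership-inference baseline is loss thresholding: training members tend to get lower loss, so per-example loss alone separates members from non-members. a hedged sketch (one common baseline, not necessarily the paper's exact attack):

```python
# loss-thresholding membership inference, the standard baseline:
# score = per-example loss; lower loss => more likely a training member.
# reported as AUC, where 0.5 means the attack has no signal.
import numpy as np

def mia_auc(member_losses, nonmember_losses):
    m = np.asarray(member_losses)[:, None]
    n = np.asarray(nonmember_losses)[None, :]
    # probability a random member has lower loss than a random non-member
    return float((m < n).mean() + 0.5 * (m == n).mean())
```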
and finally we can compute membership inference success rate across all our models, ending up with a clean scaling law

main takeaway: models trained on massive datasets (e.g. every LLM that comes out) can't memorize their training data

there's simply not enough capacity
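one way to picture that scaling law (the functional form here is my assumption, not taken from the paper): MIA success should decay toward chance as the dataset's information content outgrows model capacity.

```python
# hedged sketch: fit an assumed sigmoid-in-log-ratio form to made-up
# points relating dataset-size/capacity ratio to MIA success.
import numpy as np
from scipy.optimize import curve_fit

def mia_success(ratio, a, b):
    # ratio = dataset_bits / capacity_bits; decays from 1.0 toward 0.5
    return 0.5 + 0.5 / (1.0 + np.exp(a * (np.log(ratio) - b)))

ratios = np.array([0.1, 0.5, 1.0, 5.0, 50.0])       # made-up
success = np.array([0.99, 0.95, 0.80, 0.58, 0.51])  # made-up
(a, b), _ = curve_fit(mia_success, ratios, success, p0=[1.0, 0.0])
```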
this was a really fun project with lots of collaborators across various institutions. it took a long time but was definitely worth it, and i learned a lot!

also thanks to everyone who gave us feedback along the way :-)

now check out the paper: arxiv.org/abs/2505.24832

