curious about the training data of OpenAI's new gpt-oss models? i was too.
so i generated 10M examples from gpt-oss-20b, ran some analysis, and the results were... pretty bizarre
time for a deep dive 🧵
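here's roughly how such unprompted sampling could be set up (a minimal sketch; the HF repo id openai/gpt-oss-20b, the BOS-only "empty prompt", and the sampling settings are my assumptions, since the thread only says the seed and temperature were varied):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def sample_unprompted(seed, temperature=1.0, max_new_tokens=2048):
    torch.manual_seed(seed)
    # "prompt with nothing": start from a single start-of-sequence token
    # (assumes the tokenizer defines one; otherwise use the chat template with an empty turn)
    start = tok.bos_token_id if tok.bos_token_id is not None else tok.eos_token_id
    input_ids = torch.tensor([[start]], device=model.device)
    out = model.generate(input_ids, do_sample=True, temperature=temperature,
                         max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)

samples = [sample_unprompted(seed) for seed in range(1_000)]  # scale the seed range up toward 10M
```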
here's a map of the embedded generations
the model loves math and code. i prompt with nothing and yet it always reasons. it just talks about math and code, and mostly in English
math – probability, ML, PDEs, topology, diffeq
code – agentic software, competitive programming, data science
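a guess at how a map like this gets made (my sketch, not necessarily the thread's pipeline; the embedding model and UMAP settings are assumptions): embed every generation, then project to 2D and plot

```python
from sentence_transformers import SentenceTransformer
import umap

def embed_map(generations):
    # embed each generation, then squash to 2D coordinates for plotting
    emb = SentenceTransformer("all-MiniLM-L6-v2").encode(generations, show_progress_bar=True)
    return umap.UMAP(n_components=2, metric="cosine").fit_transform(emb)  # shape [N, 2]
```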
first thing to notice is that practically none of the generations resemble natural webtext. but surprisingly none of them look like normal chatbot interactions either
this thing is clearly trained via RL to think and solve tasks for specific reasoning benchmarks. nothing else.
and it truly is a tortured model. here the model hallucinates a programming problem about dominos and attempts to solve it, spending over 30,000 tokens in the process
completely unprompted, the model generated and tried to solve this domino problem over 5,000 separate times
ran a classifier over outputs to get a sense of which programming languages gpt-oss knows
they seem to have trained on nearly everything you've ever heard of. especially a lot of Perl
(btw, from my analysis Java and Kotlin should be way higher. classifier may have gone wrong)
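for reference, a minimal version of such a language-classification pass (i don't know which classifier was actually used; pygments' heuristic lexer guesser stands in here):

```python
from collections import Counter
from pygments.lexers import guess_lexer
from pygments.util import ClassNotFound

def language_histogram(code_snippets):
    counts = Counter()
    for snippet in code_snippets:
        try:
            counts[guess_lexer(snippet).name] += 1  # e.g. "Python", "Perl", "Kotlin"
        except ClassNotFound:
            counts["unknown"] += 1
    return counts
```

a purpose-built classifier would likely behave differently from this heuristic, which is the kind of thing that could explain the Java/Kotlin undercounting noted above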
what you can't see from the map is that many of the chains start in English but slowly descend into Neuralese
the reasoning chains happily alternate between Arabic, Russian, Thai, Korean, Chinese, and Ukrainian. then usually make their way back to English (but not always)
the OCR conjecture:
some examples include artifacts such as OCRV ROOT, which indicate the training data may have been run through OCR
reading between the lines: OpenAI is scanning books
(for some reason the model loves mentioning how many deaf people live in Malaysia)
what are some explanations for constant codeswitching?
1. OpenAI has figured out RL. the models no longer speak English
2. data corruption issues via OCR or synthetic training
3. somehow i forced the model to output too many tokens and the outputs gradually shift out of distribution
there are a small number of creative outputs interspersed throughout
here's one example where the model starts writing a sketch for a Norwegian screenplay 🤷‍♂️
i also learned a lot from this one.
the model is *really* good at using unicode
...but might be bad at physics. what in the world is a 'superhalo function'
if you want to try the data, here you go, it's on huggingface:
even though i varied the random seed and used temperature, a lot of the outputs are highly redundant
it would be prudent to deduplicate; i bet there are only 100k or fewer mostly-unique examples here
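a minimal dedup pass might look like this (my suggestion, not something shipped with the dataset; it only catches exact matches after normalization, so near-duplicates would still need a MinHash/LSH pass on top):

```python
import hashlib

def deduplicate(examples):
    seen, unique = set(), []
    for text in examples:
        # normalize whitespace and case, then hash to catch exact duplicates
        key = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique
```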
FUTURE WORK – describing differences
@ZhongRuiqi has some incredible work on methods for describing the difference between two text distributions *in natural language*
we could compare outputs of 20b to the 120b model, or LLAMA, or GPT-5...
FUTURE WORK – direct extraction
we're working on directly extracting training data from models using RL and other methods. we'll be presenting our first work on this at COLM, and expect more in this space
we may be able to directly extract data from the 120b model.. one day 😎
Eventually BERT gave rise to RoBERTa. Then, DeBERTa. Later, ModernBERT.
And now, NeoBERT. The new state-of-the-art small-sized encoder:
the key insight, i think, is using an optimal depth-to-width ratio for the transformer architecture. and training on good data. a lot of good data.
even though NeoBERT has slightly more parameters, it's still faster AND more effective than ModernBERT for long sequences:
like many important advancements in deep learning, NeoBERT arose from running lots of tiny experiments, learning from them, and stacking the results together into something that works really well:
NEW RESEARCH: Approximating Language Model Training Data from Weights
ever wonder how much information is available in an open-weights model?
DeepSeek R1 weights are 1.2 TB...
what can we learn from all those bits?
our method reverses LLM finetuning to recover data: 🧵
to do this, you need TWO sets of model weights: the initial model and a finetune
this is realistic. open-weights models often come with two checkpoints
instead of one-shot generating data from weights, we select data from the web with gradients that point along the model diff
our algorithm is a bit complicated, mostly because computing per-example gradients is hard to do at scale
so we make some efficiency improvements:
- computing grads w vmap
- only using last-layer grads (which are still big, in the case of LMs)
- projecting them to a smaller dim
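roughly what those tricks might look like together (my reconstruction under assumptions, not the released code; using the LM head as "last layer" and a fixed random projection are stand-ins):

```python
import torch
from torch.func import grad, vmap

def last_layer_loss(lm_head_w, hidden, label):
    # hidden: [d] final hidden state for one example, lm_head_w: [V, d]
    logits = hidden @ lm_head_w.T                                   # [V]
    return -torch.log_softmax(logits, dim=-1).gather(0, label.view(1)).squeeze()

# per-example grads w.r.t. the last layer, vectorized over the batch with vmap
per_example_grads = vmap(grad(last_layer_loss), in_dims=(None, 0, 0))

def score_candidates(lm_head_base, lm_head_finetuned, hiddens, labels, proj):
    # hiddens: [B, d] hidden states of candidate web examples under the base model
    # proj: [V*d, k] fixed random projection down to a small dimension k
    grads = per_example_grads(lm_head_base, hiddens, labels)        # [B, V, d]
    g = grads.flatten(1) @ proj                                     # [B, k]
    diff = (lm_head_finetuned - lm_head_base).flatten() @ proj      # [k] projected model diff
    # finetuning moves weights along the *negative* loss gradient, so candidates whose
    # -grad points along the weight diff are the likeliest training data
    return torch.nn.functional.cosine_similarity(-g, diff.unsqueeze(0), dim=1)
```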
**GPT-style language models memorize 3.6 bits per param**
we compute capacity by measuring total bits memorized, using some theory from Shannon (1953)
shockingly, the memorization-datasize curves all have the same shape: a straight rise, then a hard plateau
(🧵)
this all started from a quest to come up with a proper measurement of model memorization
it's hard to compute *per-example* memorization, because models "share" info between datapoints
so we start with random uniform strings, where sharing isn't possible. and we get this:
we then compute the capacity of different models
(GPT models with varying numbers of layers and hidden dimensions)
averaged over hundreds of models in fp32, we get the following curve, indicating a linear trend of around 3.6 bits-per-parameter, regardless of the exact details:
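for concreteness, here's one way to turn that into numbers (a hedged sketch of the idea, not the paper's exact estimator): on uniform-random token strings, any compression below the uniform baseline has to be memorization, so bits memorized ≈ uniform code length minus the model's code length

```python
import math
import torch

@torch.no_grad()
def bits_memorized(model, token_ids, vocab_size):
    # token_ids: LongTensor [seq_len] drawn uniformly at random from the vocab
    logits = model(token_ids.unsqueeze(0)).logits[0]              # [seq_len, vocab]
    logprobs = torch.log_softmax(logits[:-1], dim=-1)
    nll_nats = -logprobs.gather(1, token_ids[1:, None]).sum()     # model code length (nats)
    nll_bits = nll_nats.item() / math.log(2)
    uniform_bits = (len(token_ids) - 1) * math.log2(vocab_size)   # incompressible baseline
    return max(0.0, uniform_bits - nll_bits)
```

summing this over the training strings gives total bits memorized; dividing by parameter count gives the bits-per-parameter number above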
a lot of past research (relative representations, The Platonic Representation Hypothesis, comparison metrics like CCA, SVCCA, ...) has asserted that, past a certain scale, different models learn the same thing
this has been shown using various metrics of comparison
we take things a step further. if models E1 and E2 are learning 'similar' representations, what if we were able to actually align them?
and can we do this with just random samples from E1 and E2, by matching their structure?
we take inspiration from 2017 GAN papers that aligned pictures of horses and zebras...
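a minimal version of that GAN-style idea (my illustration of the general recipe, not the actual method; dims and architectures are made up): learn a translator from E1's space to E2's space so translated embeddings are indistinguishable from real E2 embeddings, using only unpaired samples

```python
import torch
import torch.nn as nn

d1, d2 = 768, 1024                   # assumed embedding dims of E1 and E2
T = nn.Linear(d1, d2)                # translator E1 -> E2
D = nn.Sequential(nn.Linear(d2, 256), nn.ReLU(), nn.Linear(256, 1))  # discriminator
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def step(x1, x2):
    # x1: [B, d1] random samples from E1, x2: [B, d2] random samples from E2 (unpaired)
    opt_D.zero_grad()
    d_loss = bce(D(x2), torch.ones(len(x2), 1)) + bce(D(T(x1).detach()), torch.zeros(len(x1), 1))
    d_loss.backward(); opt_D.step()
    opt_T.zero_grad()
    t_loss = bce(D(T(x1)), torch.ones(len(x1), 1))  # translator tries to fool D
    t_loss.backward(); opt_T.step()
    return d_loss.item(), t_loss.item()
```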
We spent a year developing cde-small-v1, the best BERT-sized text embedding model in the world.
today, we're releasing the model on HuggingFace, along with the paper on ArXiv.
I think our release marks a paradigm shift for text retrieval. let me tell you why👇
Typical text embedding models have two main problems:
1. training them is complicated and requires many tricks: giant batches, distillation, hard negatives...
2. the embeddings don't "know" what corpus they will be used in; consequently, all text spans are encoded the same way
To fix (1) we develop a new training technique: contextual batching. all batches share a lot of context – one batch might be about horse races in Kentucky, the next batch about differential equations, etc.
this lets us get better performance without big batches or hard negative mining. there's also some cool theory behind it
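here's the gist of contextual batching as i understand it (a sketch under assumptions; the clustering method and sizes are placeholders): cluster the corpus, then draw every batch from a single cluster so all examples in a batch share context

```python
import numpy as np
from sklearn.cluster import KMeans

def contextual_batches(embeddings, batch_size, n_clusters, seed=0):
    # one batch = examples from one topical cluster (horse races, PDEs, ...)
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init="auto").fit_predict(embeddings)
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        for start in range(0, len(idx) - batch_size + 1, batch_size):
            yield idx[start:start + batch_size]  # indices of one context-coherent batch
```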