Tom Goldstein
Aug 18, 2022 · 9 tweets
Why have diffusion models displaced GANs so quickly? Consider the tale of the (very strange) first DALLE model. In 2021, diffusions were almost unheard of, yet the creators of DALLE had already rejected the GAN approach. Here’s why. 🧵
DALLE is an image model, but it was built like a language model. The model was trained on image-caption pairs. Captions were encoded as 256 tokens. Images were broken into a 32x32 grid of patches, each encoded as a token. All tokens were merged into a single sequence.
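Roughly, the merged sequence looks like this (a minimal sketch; the token ids, padding, and discrete-VAE image encoder are assumptions, not the released DALLE code):

```python
# Sketch of DALLE-style sequence building: 256 caption tokens followed by
# 32x32 = 1024 image tokens from a discrete VAE codebook. Names are hypothetical.
TEXT_LEN, GRID = 256, 32
PAD_ID = 0  # hypothetical padding token id

def build_sequence(caption_tokens, image_token_grid):
    """Merge text and image tokens into one 1280-token training sequence."""
    text = list(caption_tokens)[:TEXT_LEN]
    text += [PAD_ID] * (TEXT_LEN - len(text))                  # pad caption to 256
    image = [tok for row in image_token_grid for tok in row]   # flatten the 32x32 grid
    return text + image                                        # length 256 + 1024
```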
A transformer-based "language" model was trained on these sequences, ignoring the fact that some tokens represent text and some represent patches. The model reads in a partial sequence of tokens, and predicts the next token in the sequence.
At test time, the model is first given a sequence of text tokens (the caption), and it produces the next token in the sequence (the upper left image patch). Then the existing text+patch tokens are handed back to the model, and it produces the next image patch.
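In code, the sampling loop might look something like this (a sketch assuming `model` returns next-token logits at every position; not the actual DALLE implementation):

```python
import torch

@torch.no_grad()
def generate_image_tokens(model, caption_tokens, n_image_tokens=32 * 32):
    """Autoregressively sample 1024 image-patch tokens after the caption."""
    seq = caption_tokens.clone()                      # shape (1, 256): the text prompt
    for _ in range(n_image_tokens):
        logits = model(seq)[:, -1, :]                 # distribution over the next token
        next_tok = torch.multinomial(logits.softmax(-1), num_samples=1)
        seq = torch.cat([seq, next_tok], dim=1)       # append the sampled patch token
    return seq[:, caption_tokens.shape[1]:]           # keep only the image tokens
```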
So why didn't DALLE just use a GAN? GAN training requires solving a saddle-point/minimax problem, which is quite unstable. Simple tricks can help stabilize this (shameless self-promotion alert: openreview.net/forum?id=Skj8K…), but the problem remains.
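For reference, the original GAN objective (Goodfellow et al., 2014) couples the two networks in a saddle-point problem: the generator G minimizes the very thing the discriminator D maximizes:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

Gradient steps on G and D chase a moving target, and that tug-of-war is where the instability comes from.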
In fact, GANs were proposed by Goodfellow in 2014, but it took three years and countless gallons of grad student tears before stable routines for ImageNet appeared in 2017. Training on the very large and diverse DALLE dataset would have been extremely challenging.
Unlike GANs, the language model approach to DALLE only required minimizing a standard cross-entropy loss (convex in the model's predicted probabilities), and we know how to make that stable.
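Concretely, the training objective is just the next-token cross-entropy over the whole sequence, text and image tokens alike:

```latex
\mathcal{L}(\theta) \;=\; -\sum_{t=1}^{T} \log p_\theta\!\left(s_t \mid s_{<t}\right)
```

One loss, one minimization; ordinary SGD/Adam handles it.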
Today, the vision community is turning towards diffusion models. Diffusions are built on simple image denoising nets that minimize a convex regression loss (usually least-squares or L1). No minimax. Easy breezy. It was probably an easy choice to build DALLE2 on this approach.
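Here's a minimal sketch of one diffusion training step (DDPM-style epsilon prediction; `denoiser` and the noise schedule `alpha_bar` are assumptions):

```python
import torch
import torch.nn.functional as F

def diffusion_loss(denoiser, x0, alpha_bar, T=1000):
    """Add noise at a random timestep, then regress the noise with plain MSE."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)                # cumulative noise schedule at step t
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps        # noised image
    eps_hat = denoiser(x_t, t)                        # predict the added noise
    return F.mse_loss(eps_hat, eps)                   # least-squares regression, no minimax

# Example schedule (linear betas, as in DDPM):
# betas = torch.linspace(1e-4, 0.02, 1000)
# alpha_bar = torch.cumprod(1.0 - betas, dim=0)
```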
The success of diffusion models is a great example of the impact that a new mathematical paradigm can have. All the hyper-parameter tuning in the world can't beat a few lines of thoughtful math.

More from @tomgoldsteincs

Feb 10
New open source reasoning model!

Huginn-3.5B reasons implicitly in latent space 🧠

Unlike O1 and R1, latent reasoning doesn’t need special chain-of-thought training data, and doesn't produce extra CoT tokens at test time.

We trained on 800B tokens 👇
Huginn was built for reasoning from the ground up, not just fine-tuned on CoT.

We built our reasoning system by putting a recurrent block inside the LLM. On a forward pass, we loop this block a random number of times. By looping it more times, we dial up compute.
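A sketch of the idea (this is not the released Huginn code; the module names and the loop-count distribution are placeholders):

```python
import random
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, prelude, recurrent_block, coda, max_loops=32):
        super().__init__()
        self.prelude, self.block, self.coda = prelude, recurrent_block, coda
        self.max_loops = max_loops

    def forward(self, tokens, n_loops=None):
        # During training the loop count is sampled; at test time it can be
        # raised to spend more compute with the same weights.
        if n_loops is None:
            n_loops = random.randint(1, self.max_loops)
        h = self.prelude(tokens)           # embedding / early layers
        for _ in range(n_loops):
            h = self.block(h)              # same weights applied repeatedly
        return self.coda(h)                # late layers / LM head
```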
Recurrence improves reasoning a lot. To show this, we did a comparison with a standard architecture.

We train a standard 3.5B LLM from scratch on 180B tokens. Then we train a recurrent 3.5B model on the same tokens.

The recurrent model does 5X better on GSM8K.
Jun 20, 2024
LLMs have low randomness: if you ask the same thing twice you get similar responses. Generator prompts are a way to boost the randomness of LLMs.

Using a few generator prompts, I had Gemini write an entire instruction tuning dataset from scratch. It outperforms popular datasets.
Let’s start with a toy example of why we need generator prompts. Suppose I want a list of different colors. So I feed this prompt to Gemini 1000 times. This does poorly - I only get 33 unique outputs from 1000 runs. I need more randomness.
A generator prompt asks the model to enumerate a long list of execution paths, and then randomizes which paths get chosen.

Here's an example. The numbers 23 and 76 are randomized each time the prompt is called.

This prompt gives me 782 unique outputs from 1000 runs.
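A toy version of a generator prompt could look like this (the exact wording and the `call_llm` helper are made up; the point is just that the indices get re-randomized on every call):

```python
import random

def generator_prompt():
    # e.g. 23 and 76, drawn fresh each time the prompt is built
    i, j = random.sample(range(1, 101), 2)
    return (
        "Think of a numbered list of 100 distinct colors. "
        f"Output only the colors at positions {i} and {j}, nothing else."
    )

# response = call_llm(generator_prompt())   # different indices -> more diverse outputs
```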
Oct 12, 2023
🚨 This one simple trick will level up your LLM🚀🚀

Wait...don't go. This isn't a blue check grifter tweet!

Instruction tuning with this easy trick will *actually* boost AlpacaEval scores, even for large (70B) and llama2-chat base models…by a lot 🧵
Ok, here's the trick: during instruction finetuning, we add uniform random noise to the word embeddings.

That's it. Nothing else.

We tried this on a bunch of base models and finetuning datasets. They all showed big gains.
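A minimal sketch of the trick (simplified; the uniform noise scaled by alpha/sqrt(L·d) follows the NEFTune recipe, and the hook into the embedding layer is left out):

```python
import torch

def neftune_embed(embeddings, alpha=5.0, training=True):
    """embeddings: (batch, seq_len, dim) output of the embedding layer."""
    if not training:
        return embeddings                       # no noise at inference time
    batch, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5      # noise magnitude ~ alpha / sqrt(L*d)
    noise = torch.empty_like(embeddings).uniform_(-1, 1) * scale
    return embeddings + noise                   # noisy embeddings fed to the transformer
```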
Even when the base model is already highly refined (e.g. llama2-chat) or very large (llama2-70B), the benefits of NEFTune are still quite strong.
Jul 19, 2023
The Llama2 model is pretty impressive. Human evaluators rank it slightly *better* than ChatGPT on a range of things (excluding code and reasoning).

Here's a short TL;DR on what Meta did to improve the state of the art 🧵
Llama1: Small models (7B & 13B) were trained on 1 trillion tokens. Large models saw 1.4T tokens.

Llama2: All models were trained on 2T tokens. This means the small models are "over trained" beyond what the scaling laws recommend, resulting in unusually strong performance for their size!
As a result of the long training runs, Llama2 beats other major open-source models at most academic benchmarks. Their 7B model is WAY better than other 7B options on all tasks except code.
Jul 5, 2023
Nvidia’s AI products follow a weird reverse Moore’s law: every two years, you get half as many FLOPS for your money. This is the opposite of the rest of the chip market 📈

With the H100 release, Nvidia had to reverse course.

A 🧵 on Nvidia losing its grip on the GPU market.
Let’s focus in on the machine learning GPUs. You can see the value drop over time, until the H100 created an uptick. Note: I’m using today’s price for each card, but a similar downward trend also holds for the release prices.
The drop is because of monopoly power and clever market segmentation.
Example: The “server-grade” V100 is a minor variant of the 2080ti gaming card. Nvidia sells it to institutions instead of gamers, charging 5X more for the V100. This means huge profits.
lambdalabs.com/blog/best-gpu-…
Jun 19, 2023
Training an LLM takes about 1 trillion words. That’s about 30,000 years of typing.
But where does this data come from?
And what does this have to do with the Reddit protests?
Here’s how OpenAI trains models on “the entire internet.” 🧵📜
Much of what we know about OpenAI is from urban legends. But the GPT3 paper does have a table showing their data sources. The cliché that LLMs are trained on “the whole internet” comes from the use of CommonCrawl.
CommonCrawl (CC) is a non-profit that scrapes the internet with bots and tries to record everything since 2008. 90% of CC is HTML, CSS, and scripts. The usable 10% contains junk that needs to be tossed out to clean the dataset.
