Davis Blalock
Aug 27, 2022 · 15 tweets · 6 min read
"Understanding Scaling Laws for Recommendation Models"

For two years, the AI world has had this glorious period of believing that big tech companies just need more compute to make their models better, not more user data.

That period is ending. Here's what happened: [1/14]
In 2020, OpenAI published a paper (arxiv.org/abs/2001.08361) assessing the relative effects of scaling up models vs datasets. They found that scaling up models had *way* higher returns. [2/14]
The party was on. We got libraries like DeepSpeed (github.com/microsoft/Deep…) that let you train huge models across countless GPUs. We got trillion-parameter… [3/14]
…Mixture of Experts models (arxiv.org/abs/2101.03961). We talked about the "infinite data regime" because we weren't even bothering to use all the data.

Parameter counts were the headline and sample counts were buried in the results section. [4/14]
Fast-forward to March 2022. DeepMind releases the Chinchilla paper (arxiv.org/abs/2203.15556), which shows that a subtle issue with the OpenAI study (its learning-rate schedules weren't matched to each run's token budget) caused it to vastly underestimate the importance of dataset size. [5/14]
With smaller models and more data, the Chinchilla authors got much better results for a fixed compute budget. [6/14]
Moreover, as one well-known commentary pointed out (lesswrong.com/posts/6Fpvch8R…), the Chinchilla scaling formula suggests that there's a *hard limit* for model accuracy that no amount of model size will ever overcome without more data. [7/14]
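To make that hard limit concrete, here's a minimal sketch of the Chinchilla-style loss fit, L(N, D) = E + A/N^α + B/D^β, using approximate coefficient values as reported in the Chinchilla paper. Even as the parameter count N goes to infinity, the predicted loss never drops below the data-limited floor E + B/D^β.

```python
# Sketch: why the Chinchilla loss fit implies a data-dependent floor.
# Coefficients below are the approximate values reported by Hoffmann et al.
# (2022) for L(N, D) = E + A / N**alpha + B / D**beta.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters and n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

D = 300e9  # hold the dataset fixed at ~300B tokens
for N in [1e9, 1e10, 1e11, 1e12, 1e15]:
    print(f"N = {N:.0e} params -> predicted loss {chinchilla_loss(N, D):.3f}")

# The limit as N -> infinity: no model size gets below this without more data.
print(f"data-limited floor at D = {D:.0e}: {E + B / D**beta:.3f}")
```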
But all of the above work focused on language models.

The real moneymakers at big tech companies are recommender systems. These are what populate your feeds, choose what ads you see, etc.

Maybe language models need more data, but recommender systems don't? [8/14]
This brings us to the current paper, which studies scaling of recommender models.

Put simply: “We show that parameter scaling is out of steam...and until a higher-performing model architecture emerges, data scaling is the path forward.” [9/14]
In more detail, they conduct a thorough study of click-through rate prediction models, the workhorse of targeted ads.

They really seem to have tried to get model scaling to work, more so than any similar paper I've seen. [10/14]
E.g., they divide models into four components and dig deep into how to scale up each one as a function of model and dataset size.

But even the best-chosen model scaling isn't as good as data scaling. [11/14]
Also, similar to language model work, they find clear power laws. These mean that you need a *multiplicative* increase in data and compute to eliminate a fixed fraction of the errors.

I.e., the need for data + compute is nearly insatiable. [12/14]
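As a rough illustration of what a power law implies (the exponent below is made up for the example, not taken from the paper): if error falls off as err(D) = c * D^(-b) with a small exponent b, then every halving of the error costs a constant multiplicative factor of 2^(1/b) more data.

```python
# Sketch: the arithmetic behind "multiplicative increase in data for a
# fixed fraction of the errors". The exponent b is illustrative only; it
# is NOT a value from the recommendation-scaling paper.
b = 0.2   # hypothetical power-law exponent
c = 1.0   # hypothetical constant

def data_needed(err_target: float) -> float:
    """Dataset size required to hit err_target if err(D) = c * D**(-b)."""
    return (c / err_target) ** (1.0 / b)

for err in [0.10, 0.05, 0.025]:
    print(f"target error {err:.3f} -> data ~ {data_needed(err):.2e}")

# Each halving of the error multiplies the required data by a constant factor:
print(f"cost per halving: {2 ** (1.0 / b):.0f}x more data")
```

With b = 0.2 that factor is 32x per halving, and smaller exponents make it steeper still, which is the sense in which the appetite for data is nearly insatiable.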
I wish they *hadn't* found that recommender models need way more data—especially since recommender data is all about tracking what users see and click on.

But if that's the reality, I'm glad this information is at least shared openly via a well-executed paper. [13/14]
Speaking of which, here's the paper: bit.ly/3ARAjKI

And here's my more detailed synopsis: (dblalock.substack.com/i/69736655/und…) [14/14]
If you like this paper, consider RTing this (or another!) thread to publicize the authors' work, or following the authors: @newsha_a @CarolejeanWu @b_bhushanam.

For more threads like this, follow me or @MosaicML

As always, comments + corrections welcome!
