Rosanne Liu
Jan 24 · 7 tweets · 6 min read
Now that we can write Tiny Papers @iclr_conf, what should we write about?

I'd like to invite all established researchers to contribute Tiny Ideas as inspiration: seeds for discussion & future collaborations! #TinyIdeasForTinyPapers

I'll start. Note: bad ideas == good starts.
1. Calibrate-before-train: before training a model with *data*, train it with noise to calibrate: the loss encourages it to output "chance probability" (a uniform distribution over classes), making the model as neutral as possible before real training starts. Does it help? Why or why not?
2. Does distillation need real data? Can we train student models with *any* data or even noise inputs, just to mimic teacher's behavior? How far does that get us? Is the scaling curve much worse than using real data?
3. Off-manifold-Adv-attack: generative image models these days seem to be really good at *staying on the image manifold*, e.g., whatever you prompt them with, even nonsensical strings, they generate good images. What attacks can push them off the image manifold to generate noise?
4. Positional encoding: I don't have a good idea here, but I've always thought positional encoding, as it stands, is quite inelegant😛 the fact that you have to input the index of tokens in addition to an already ordered list seems so unnecessary and ugly. Someone get rid of it please!
5. Single epoch training: there's no real reason a model needs to see any data point twice. Recent large models have turned to single-epoch training but not because we fixed that bug, only because they have too much data. What would it be like to finally fix that bug?
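Idea 1 fits in a few lines. Here's a minimal sketch with a toy linear-softmax model; everything in it (the model, sizes, learning rate) is invented for illustration, not part of the thread's proposal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 4, 8
W = rng.normal(0.0, 1.0, (dim, n_classes))  # deliberately non-neutral init
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Calibration phase: train on pure noise against a uniform-distribution target.
# Cross-entropy against the uniform target has gradient (p - 1/n_classes)
# w.r.t. the logits, so updates push the model toward "chance probability".
lr = 0.5
for _ in range(500):
    x = rng.normal(size=(64, dim))          # noise inputs, no real data
    p = softmax(x @ W + b)
    g = (p - 1.0 / n_classes) / len(x)      # dLoss/dlogits, averaged over batch
    W -= lr * (x.T @ g)
    b -= lr * g.sum(axis=0)

# After calibration the model should be close to neutral on any input.
probs = softmax(rng.normal(size=(16, dim)) @ W + b)
max_dev = np.abs(probs - 1.0 / n_classes).max()
```

The actual experiment would then use the calibrated (W, b) as the init for normal training and ask whether starting neutral changes convergence speed or later calibration error.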
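Idea 2 also has a one-screen toy version. With a linear "teacher" and "student" (both invented here for illustration), noise inputs alone are enough to copy the teacher, because random Gaussians probe every direction of the input space; the open question in the thread is how badly this degrades with nonlinearity and high dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_classes = 8, 4
W_teacher = rng.normal(size=(dim, n_classes))  # stand-in for a trained teacher
W_student = np.zeros((dim, n_classes))

# Distillation from noise alone: sample random inputs, read off the teacher's
# logits, and regress the student's logits onto them. No real data involved.
lr = 0.1
for _ in range(500):
    x = rng.normal(size=(32, dim))               # noise "training set"
    err = x @ W_student - x @ W_teacher          # student vs. teacher logits
    W_student -= lr * (x.T @ err) / len(x)       # gradient of 0.5 * MSE on logits

# Check prediction agreement on fresh inputs neither model was tuned on.
x_test = rng.normal(size=(200, dim))
agreement = np.mean(
    (x_test @ W_student).argmax(axis=1) == (x_test @ W_teacher).argmax(axis=1)
)
```

For the linear case the student recovers the teacher almost exactly; plotting agreement vs. amount of noise data, compared against real data, would give the scaling curve the tweet asks about.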

More from @savvyRL

Jun 25, 2022
A quick thread on "How DALL-E 2, Imagen and Parti Architectures Differ" with breakdown into comparable modules, annotated with size 🧵
#dalle2 #imagen #parti

* figures taken from corresponding papers with slight modification
* parts used for training only are greyed out

[Figure: a compilation of model architecture diagrams of the three models]
By now we know that
- DALL-E & Imagen = diffusion; Parti = autoregressive
- Imagen & Parti use generic text encoders; DALLE uses CLIP enc

But in fact, one version of Imagen also used CLIP, and one version of DALL-E also had an autoregressive prior. So there are more connections than it first seems.
If we break each architecture down into *modules*, the similarity/comparability is even more clear.

First of all, they all have a "text encoder", but differ in types and sizes:
- DALL-E uses CLIP text encoder
- Imagen uses T5-XXL
- Parti uses a generic transformer
Dec 13, 2020
Favorite #NeurIPS2020 presentations and posters this year

PS: heavily biased by what I happened to catch and whom I happened to talk to
PPS: still catching up on talks so the list is rather incomplete and I'd hope to grow
PPPS: with contributions from @ml_collective members
[Talk] No. 1 has to go to the keynote talk by @isbellHFh, @mlittmancs et al. Simply brilliant 🎉🎉
slideslive.com/38935825/you-c…