Now that we can write Tiny Papers @iclr_conf, what should we write about?
I'd like to invite all established researchers to contribute Tiny Ideas as inspirations, seeds for discussions & future collaborations! #TinyIdeasForTinyPapers
I'll start. Note: bad ideas == good starts.
1. Calibrate-before-train: before training a model on *data*, train it on noise to calibrate: the loss pushes outputs toward "chance probability", i.e., it makes the model as neutral as possible before real training starts. Does it help? Why or why not? (rough sketch after this list)
2. Does distillation need real data? Can we train student models on *any* data, or even noise inputs, just to mimic the teacher's behavior? How far does that get us? Is the scaling curve much worse than with real data? (sketch below)
3. Off-manifold-Adv-attack: generative image models these days seem to be really good at *staying on the image manifold*, e.g., whatever you prompt them with, even nonsensical strings, they generate good images. What attacks can get them off the image manifold to generate noise? (toy sketch below)
4. Positional encoding: I don't have a good idea here, but I've always thought positional encoding as-is is quite inelegant😛 the fact that you have to input each token's index on top of an already ordered list seems so unnecessary and ugly. Someone get rid of it please! (the standard scheme is sketched below)
5. Single epoch training: there's no real reason a model needs to see any data point twice. Recent large models have turned to single-epoch training but not because we fixed that bug, only because they have too much data. What would it be like to finally fix that bug?
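Here's a minimal sketch of idea 1, under my own assumptions (a classification setting, KL divergence to the uniform distribution as the calibration loss; all names and hyperparameters are made up):

```python
import torch
import torch.nn.functional as F

def calibrate_before_train(model, num_classes, steps=500, batch_size=64,
                           input_shape=(3, 32, 32), lr=1e-3, device="cpu"):
    """Pre-train `model` on pure noise so it outputs chance probability."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    uniform = torch.full((batch_size, num_classes), 1.0 / num_classes, device=device)
    for _ in range(steps):
        x = torch.randn(batch_size, *input_shape, device=device)  # noise, not data
        log_probs = F.log_softmax(model(x), dim=-1)
        # KL(uniform || model): zero exactly when the model outputs 1/num_classes
        loss = F.kl_div(log_probs, uniform, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```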
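And a sketch of idea 2: the standard soft-label distillation loss (Hinton et al.), just with noise in place of real inputs (the setup is hypothetical):

```python
import torch
import torch.nn.functional as F

def distill_on_noise(teacher, student, steps=1000, batch_size=64,
                     input_shape=(3, 32, 32), temperature=4.0, lr=1e-3, device="cpu"):
    """Train `student` to mimic `teacher` using only random noise inputs."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(batch_size, *input_shape, device=device)  # no real data anywhere
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)
        # usual temperature-softened distillation objective, applied to noise
        loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=-1),
                        F.softmax(t_logits / temperature, dim=-1),
                        reduction="batchmean") * temperature ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```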
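For idea 3, one naive attack to start from. Everything here is a placeholder assumption, not anyone's actual method: a differentiable `generator` mapping latents to images, and total variation as a crude "off-manifold" score (natural images have low TV, white noise has high TV):

```python
import torch

def total_variation(img):
    # sum of absolute differences between neighboring pixels
    return ((img[..., :, 1:] - img[..., :, :-1]).abs().sum()
            + (img[..., 1:, :] - img[..., :-1, :]).abs().sum())

def off_manifold_attack(generator, latent_dim, steps=200, lr=0.05, device="cpu"):
    """Gradient-ascend a latent so the generator emits noise-like outputs."""
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)            # assumed differentiable latent -> image map
        loss = -total_variation(img)  # maximize TV = push output off the manifold
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```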
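And for idea 4, the object of the complaint: the standard sinusoidal positional encoding from "Attention Is All You Need", which explicitly injects each token's index even though the sequence is already ordered:

```python
import math
import torch

def sinusoidal_pe(seq_len, d_model):
    """Standard sinusoidal positional encoding (assumes even d_model)."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # token indices
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)  # even dims: sine
    pe[:, 1::2] = torch.cos(pos * div)  # odd dims: cosine
    return pe  # used as: x = token_embeddings + sinusoidal_pe(seq_len, d_model)
```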
• • •
A quick thread on "How DALL-E 2, Imagen and Parti Architectures Differ", with a breakdown into comparable modules, annotated with sizes 🧵 #dalle2 #imagen #parti
* figures taken from corresponding papers with slight modification
* parts used for training only are greyed out
By now we know that
- DALL-E 2 & Imagen = diffusion; Parti = autoregressive
- Imagen & Parti use generic text encoders; DALL-E 2 uses the CLIP encoder
But in fact, one version of Imagen also used CLIP, and one version of DALL-E 2 also had an AR prior. So there are more connections than it might seem.
If we break each architecture down into *modules*, the similarity/comparability becomes even clearer.
First of all, they all have a "text encoder", but differ in types and sizes:
- DALL-E 2 uses the CLIP text encoder
- Imagen uses T5-XXL
- Parti uses a generic transformer
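As a schematic (my own simplification of the three papers, not official code), the shared module decomposition looks roughly like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextToImageArch:
    name: str
    text_encoder: str     # maps the prompt to embeddings
    prior: Optional[str]  # maps text embedding to image embedding (DALL-E 2 only)
    generator: str        # produces pixels from the conditioning

ARCHS = [
    TextToImageArch("DALL-E 2", "CLIP text encoder", "diffusion prior",
                    "diffusion decoder + upsamplers"),
    TextToImageArch("Imagen", "frozen T5-XXL", None,
                    "diffusion base + super-resolution cascade"),
    TextToImageArch("Parti", "transformer encoder", None,
                    "autoregressive transformer over ViT-VQGAN image tokens"),
]

for a in ARCHS:
    print(f"{a.name}: {a.text_encoder} -> {a.prior or '(no prior)'} -> {a.generator}")
```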
• • •
Favorite #NeurIPS2020 presentations and posters this year
PS: heavily biased by what I happened to catch and whom I happened to talk to
PPS: still catching up on talks, so the list is rather incomplete; I hope to grow it
PPPS: with contributions from @ml_collective members