Ian Goodfellow @goodfellow_ian
2nd thread on evaluating GAN papers (1st thread hit max thread length)
Many DL algorithms, but especially GANs and RL, get very different results each time you run them. Papers should show at least 3 runs with the same hyperparameters to get some idea of the stochasticity.
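As a concrete illustration (not from the thread), here's a minimal sketch of what reporting that stochasticity might look like. `train_and_evaluate` is a hypothetical stand-in for a full training run that returns one scalar metric (say, FID) for a fixed hyperparameter setting:

```python
import statistics

def report_stochasticity(train_and_evaluate, seeds=(0, 1, 2)):
    # Train once per seed with identical hyperparameters and report the
    # spread, not just the single best score.
    scores = [train_and_evaluate(seed=s) for s in seeds]
    print(f"per-seed scores: {scores}")
    print(f"mean={statistics.mean(scores):.3f} "
          f"std={statistics.stdev(scores):.3f} "
          f"min={min(scores):.3f} max={max(scores):.3f}")
    return scores
```

The point is the spread across seeds, not the single best number.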
A lot of papers that look like they show an improvement are just cherry-picking good runs of the new method and bad runs of the old method.
Even papers without evidence of cherry-picking often show a single learning curve for the new method and a single learning curve for the baseline, with the two curves so close together that I'm confident two runs of the same method would be separated more widely.
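One hedged way to make that visible: plot every run of every method on the same axes, so readers can judge whether the gap between methods exceeds the gap between runs. The data layout here (`curves` as a dict mapping a method name to a list of runs) is assumed for illustration:

```python
import itertools
import matplotlib.pyplot as plt

def plot_all_runs(curves):
    # One color per method, one faint line per run, so run-to-run spread
    # is visible alongside the method-to-method gap.
    palette = itertools.cycle([f"C{i}" for i in range(10)])
    for method, runs in curves.items():
        color = next(palette)
        for i, run in enumerate(runs):
            plt.plot(run, color=color, alpha=0.6,
                     label=method if i == 0 else None)
    plt.xlabel("iteration")
    plt.ylabel("evaluation metric")
    plt.legend()
    plt.show()
```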
When explaining how hyperparameters were optimized, it's important to be clear about whether they were chosen to optimize the max, min, or mean performance over multiple runs.
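A tiny sketch of making that choice explicit in a hyperparameter search; all names here are hypothetical, and `results` maps each setting to the scores of its runs (higher is better):

```python
import statistics

def select_hyperparameters(results, aggregate=statistics.mean):
    # aggregate=min selects for worst-case performance over runs,
    # aggregate=max for the single best run, statistics.mean for the
    # average. Whichever you choose, state it in the paper.
    return max(results, key=lambda hp: aggregate(results[hp]))
```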
Another thing to keep in mind is that it's possible to write a bad paper about a good method. Sometimes we see a paper with a new method that works well, but also a lot of unsupported scientific claims. Reviewers should try to rein in the latter.
If you're an area chair, I highly suggest micro-targeting reviewer-paper matches. I don't think there is such a thing as a generic GAN expert. For example, if you get a paper about GANs with encoders, try to get an author of ALI / BiGAN / alpha-GAN / AVB.
Even I would be a frustratingly ignorant reviewer of a lot of GAN sub-topics.
If you review a paper about mode collapse and the authors think mode collapse means memorizing a subset of the training examples, be suspicious. Mode collapse is usually much weirder.
For example, mode collapse is often onto weird garbage points that don't resemble the data. These points tend to move around during training.
Mode collapse can also be very subtle repetition of textures or backgrounds in images that otherwise might look diverse to the human eye.
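The thread doesn't prescribe a test, but one crude check for this kind of subtle repetition (my sketch, under stated assumptions) is to look for near-duplicate generated samples in a feature space, e.g. embeddings from a pretrained classifier; a cluster of unusually small nearest-neighbor distances can flag repetition the eye misses. `features` is assumed to be an (n_samples, d) NumPy array of such embeddings:

```python
import numpy as np

def min_pairwise_distances(features):
    # Distance from each generated sample to its nearest other sample.
    # O(n^2) memory, so intended for a few thousand samples at most.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distances
    return dists.min(axis=1)
```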
That's about it for today's brain-dump. Feel free to reply with your own advice for GAN paper reviewers.