Excited to share our #ICML2020 paper on fair generative modeling! We present a scalable approach for mitigating dataset bias in generative models trained on multiple data sources, without requiring explicit annotations. 👇 (1/7)

w/ @adityagrover_ @_smileyball Trisha Singh @StefanoErmon
arxiv.org/abs/1910.12008
Generative models can be trained on large, unlabeled data sources.

If we naively mix all data sources, the trained model will propagate or even amplify the biases in this mixture. On the other hand, labeling every attribute of interest may be impossible or super expensive. (2/7)
We use one dataset as a reference (chosen from external prior knowledge) and treat all other datasets as biased w.r.t. this reference. Our idea is to construct an *importance weighted* dataset for learning, where each weight is the density ratio between the biased and reference distributions. (3/7)
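For intuition, here is a minimal sketch (not the paper's code) of how those weights enter training: each biased example is reweighted by w(x) = p_ref(x) / p_bias(x), so the weighted loss approximates the expected loss under the reference distribution. The model.log_prob interface below is an assumption; for GANs or VAEs the corresponding per-example loss would be reweighted instead.

import torch

def weighted_loss(model, x_batch, weights):
    # Per-example negative log-likelihood; assumes a density model exposing
    # log_prob (hypothetical interface standing in for the actual model loss).
    nll = -model.log_prob(x_batch)     # shape: [batch_size]
    return (weights * nll).mean()      # importance-weighted average loss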
To estimate this ratio, we train a binary classifier to distinguish examples from the reference and biased datasets.

The classifier’s odds ratio is a consistent estimator of the density ratio. This requires *no labels* and makes *no assumptions* about the source of the bias. (4/7)
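As a sketch of that estimator (assumed interface, not the paper's code): label reference examples y=1 and biased examples y=0, train any probabilistic classifier c(x) ≈ P(y=1 | x), and read the density ratio off the classifier's odds, with a factor correcting for unequal dataset sizes.

import torch

def importance_weights(classifier, x, n_ref, n_bias, eps=1e-6):
    # classifier(x) is assumed to output P(reference | x) in [0, 1].
    c = classifier(x).clamp(eps, 1 - eps)
    # The odds ratio, times the prior correction n_bias / n_ref, recovers
    # p_ref(x) / p_bias(x), which is used as the importance weight.
    return (c / (1 - c)) * (n_bias / n_ref)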
Empirically, importance weighting works well even when the reference dataset is small relative to the biased dataset, with little compromise in resulting sample quality as measured by FID. (5/7)
There’s still a long way to go toward generative models that can be reliably deployed in the real world!

We hope our work is a first step towards solutions that take a more holistic view of bias and fairness in generative modeling, esp. in light of everything going on lately. (6/7)
Come virtually say hi at the conference during our poster sessions on Tuesday, July 14th: (1) 7:00-7:45am PST and (2) 8:00-8:45pm PST. (7/7)