Excited to share our #ICML2020 paper on fair generative modeling! We present a scalable approach for mitigating dataset bias in generative models trained on multiple datasets, without requiring explicit attribute annotations. 👇
If we naively mix all the datasets, a model trained on the mixture will propagate or even amplify its biases. On the other hand, labeling all attributes of interest may be impossible or super expensive. (2/7)