(2/6) EditGAN builds on a GAN framework that jointly models images and their semantic segmentation (nv-tlabs.github.io/datasetGAN/). Because manually modifying a segmentation is easy, we can find editing vectors in latent space that enable high-precision image editing. A minimal sketch of this idea follows below.
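A minimal sketch (not the official EditGAN code) of learning an editing vector: optimize a latent offset so that the jointly generated segmentation matches a hand-edited mask, while keeping the image unchanged outside the edited region. The generator returning an (image, segmentation) pair, the loss weights, and all names here are illustrative assumptions.

import torch
import torch.nn.functional as F

def find_editing_vector(generator, w, edited_mask, edit_region, steps=100, lr=0.05):
    """Sketch under stated assumptions.
    w: latent code of the image to edit
    edited_mask: hand-edited target segmentation, long tensor of shape (N, H, W)
    edit_region: boolean mask of the pixels the user actually changed, shape (N, H, W)
    generator(w) is assumed to return (image, segmentation_logits)."""
    delta_w = torch.zeros_like(w, requires_grad=True)   # the editing vector to learn
    opt = torch.optim.Adam([delta_w], lr=lr)

    with torch.no_grad():
        img_orig, _ = generator(w)                      # reference for the preservation loss

    for _ in range(steps):
        img, seg_logits = generator(w + delta_w)
        # Match the edited segmentation labels
        seg_loss = F.cross_entropy(seg_logits, edited_mask)
        # Keep the image unchanged outside the edited region
        keep = (~edit_region).float().unsqueeze(1)
        rgb_loss = F.l1_loss(img * keep, img_orig * keep)
        loss = seg_loss + 10.0 * rgb_loss               # illustrative weighting
        opt.zero_grad()
        loss.backward()
        opt.step()

    return delta_w.detach()                             # reusable editing vector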
(3/6) EditGAN allows us to learn an arbitrary number of editing vectors, which can be directly applied to other images at interactive rates. We show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality.
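A minimal sketch of applying a previously learned editing vector to another image's latent code: no optimization is needed at edit time, just a scaled vector addition, which is what makes editing interactive. The function and parameter names are illustrative assumptions.

import torch

@torch.no_grad()
def apply_edit(generator, w_new, delta_w, alpha=1.0):
    """w_new: latent code of another (e.g. GAN-inverted) image,
    delta_w: editing vector learned once, alpha: edit strength."""
    img_edited, _ = generator(w_new + alpha * delta_w)
    return img_edited

# Edits compose by simple vector addition, e.g. (hypothetical vectors):
# img = apply_edit(generator, w_new, delta_smile + 0.5 * delta_gaze)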
(4/6) EditGAN is the first GAN-driven image editing method that simultaneously 1) offers high-precision editing, 2) requires very little annotated training data, 3) runs interactively, 4) can easily compose multiple edits, and 5) works on real, generated, and out-of-domain images.
(6/6) At the NVIDIA Toronto AI Lab, we are looking for motivated interns (including exceptional undergraduates in their senior years) with strong maths skills and a passion for research to build amazing applications of modern generative models! Interested? Reach out (huling@nvidia.com)