It proposes architectural changes that suppress aliasing and force the model to perform more natural hierarchical refinement, which improves its ability to generate video and animation.
In the cinemagraph below, we can see that in StyleGAN2 (left) the texture (e.g., wrinkles and hair) appears to stick to the screen coordinates. In comparison, StyleGAN3 (right) transforms details coherently:
The following example shows the same issue with StyleGAN2: textural details appear fixed to the screen. With the alias-free StyleGAN3, they transform smoothly together with the rest of the scene.
In the interpolation example below, it appears that StyleGAN3 even learns to mimic camera motion:
Results show improvements on FFHQ-U when the proposed ideas are applied, converting the StyleGAN2 generator to be fully equivariant to translation (config T) and additionally to rotation (config R). The discriminator remains unchanged.
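Equivariance can be checked numerically: translating the input and then generating should match generating and then translating, and the paper scores this as a PSNR-style metric (EQ-T). The toy generator below is an illustrative assumption — a circular convolution, which is translation-equivariant by construction — not the actual network:

```python
import numpy as np

def toy_generator(z):
    """Stand-in generator: circular convolution is translation-equivariant."""
    kernel = np.array([0.25, 0.5, 0.25])
    K = np.fft.rfft(kernel, n=len(z))
    return np.fft.irfft(np.fft.rfft(z) * K, n=len(z))

def eq_t(gen, z, shift, peak=2.0):
    """PSNR (dB) between shift-then-generate and generate-then-shift."""
    a = gen(np.roll(z, shift))               # translate input, then generate
    b = np.roll(gen(z), shift)               # generate, then translate output
    mse = max(np.mean((a - b) ** 2), 1e-20)  # avoid log(0) when exact
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
z = rng.standard_normal(256)
score = eq_t(toy_generator, z, shift=17)
print(score)  # very high dB: near-perfect equivariance
```

A non-equivariant generator (e.g., one that mixes in absolute pixel coordinates) would score far lower on the same check.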
The following are results for six datasets using StyleGAN2 and the proposed alias-free generators (configs T and R).
The animation below visualizes the internal representations of both StyleGAN2 and StyleGAN3. StyleGAN3 appears to build the image in a fundamentally different manner, from "multi-scale phase signals that follow the features seen in the final image":
In a new paper from @wightmanr et al., a traditional ResNet-50 is re-trained using a modern training procedure. It achieves a very competitive 80.4% top-1 accuracy on ImageNet without using extra data or distillation.
The paper catalogues the exact training settings to provide a robust baseline for future experiments:
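As a sketch, such a recipe can be captured in a single config. Every value below is my paraphrase of the paper's longest ("A1") procedure and should be treated as an assumption — consult the paper's tables for the exact settings:

```python
# Hedged sketch of an A1-style training recipe; all values are
# assumptions to be checked against the paper, not verified settings.
a1_recipe = {
    "model": "resnet50",
    "epochs": 600,                  # A1 is the longest schedule
    "batch_size": 2048,
    "optimizer": "lamb",            # LAMB instead of SGD
    "lr_schedule": "cosine",
    "loss": "binary_cross_entropy", # BCE instead of softmax cross-entropy
    "mixup": True,
    "cutmix": True,
    "rand_augment": True,
    "repeated_augmentation": True,
    "train_resolution": 176,        # evaluation crops are larger (224)
}
print(sorted(a1_recipe))
```

Writing the recipe down as data like this is what makes it reusable as a baseline: the same dict can drive training runs for other architectures, as the paper does.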
It also reports training cost and inference time on ImageNet classification for other architectures trained with the proposed ResNet-50-optimized training procedure:
🚨 Newsletter Issue #3. Featuring a new state-of-the-art on ImageNet, a trillion-parameter language model, 10 applications of transformers you didn’t know about, and much more! Read on below:
⏪ Papers with Code: Year in Review. We’re ending the year by taking a look back at the top trending papers, libraries and benchmarks for 2020. Read on below!