Papers with Code
Oct 12, 2021 · 8 tweets
StyleGAN3 is out and results are 🤯!

It proposes architectural changes that suppress aliasing and force the model to implement a more natural hierarchical refinement, which improves its ability to generate video and animation.

paperswithcode.com/paper/alias-fr…

1/8
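
As a rough intuition for the alias-suppression idea (not the paper's exact filter design), the nonlinearity can be wrapped between upsampling and low-pass filtering so that the high frequencies it introduces are attenuated before resampling. A minimal PyTorch sketch, using bilinear resampling and a simple blur in place of the paper's carefully designed windowed-sinc filters:

```python
import torch
import torch.nn.functional as F

def filtered_nonlinearity(x, up=2, negative_slope=0.2):
    """Conceptual sketch of an alias-suppressing nonlinearity:
    upsample -> nonlinearity -> low-pass blur -> downsample.
    StyleGAN3 uses carefully designed filters; this uses a 3x3 binomial
    blur and bilinear resampling purely for illustration."""
    n, c, h, w = x.shape
    # 1) Upsample so the nonlinearity operates on a finer grid.
    x = F.interpolate(x, scale_factor=up, mode="bilinear", align_corners=False)
    # 2) Pointwise nonlinearity (introduces high frequencies).
    x = F.leaky_relu(x, negative_slope)
    # 3) Low-pass filter to attenuate frequencies above the original band.
    k = torch.tensor([1., 2., 1.])
    blur = torch.outer(k, k)
    blur = (blur / blur.sum()).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    x = F.conv2d(x, blur, padding=1, groups=c)
    # 4) Downsample back to the original resolution.
    x = F.interpolate(x, scale_factor=1 / up, mode="bilinear", align_corners=False)
    return x

# Toy usage on a random feature map.
feat = torch.randn(1, 8, 32, 32)
print(filtered_nonlinearity(feat).shape)  # torch.Size([1, 8, 32, 32])
```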
In the cinemagraph below, we can see that in StyleGAN2 (left) the texture (e.g., wrinkles and hair) appears to stick to the screen coordinates. In comparison, StyleGAN3 (right) transforms details coherently:

2/8
The following example shows the same issue with StyleGAN2: textural details appear fixed in place. With the alias-free StyleGAN3, the details transform smoothly together with the rest of the scene.

3/8
In the interpolation example below, it appears that StyleGAN3 even learns to mimic camera motion:

4/8
Results show improvements on FFHQ-U when applying the proposed ideas by converting the StyleGAN2 generator to be fully equivariant to translation and rotation. Configs T and R correspond to the alias-free generators (translation- and rotation-equivariant, respectively). The discriminator remains unchanged.

paperswithcode.com/sota/image-gen…

5/8
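
The paper quantifies equivariance with dedicated metrics (EQ-T and EQ-R). As a conceptual sketch of what translation equivariance means, here is a toy check that compares "shift the input, then generate" against "generate, then shift the output"; the circular-padded conv stack below is a stand-in for a real generator, not StyleGAN3's metric:

```python
import torch
import torch.nn as nn

def psnr(a, b, max_val=2.0):
    """PSNR in dB between two tensors with values roughly in [-1, 1]."""
    mse = torch.mean((a - b) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

def translation_equivariance(net, x, dx=4, dy=7):
    """Compare net(shift(x)) with shift(net(x)); a perfectly
    translation-equivariant network gives identical results (PSNR -> inf)."""
    shifted_in = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
    out_then_shift = torch.roll(net(x), shifts=(dy, dx), dims=(2, 3))
    shift_then_out = net(shifted_in)
    return psnr(shift_then_out, out_then_shift)

# Toy "generator": circularly padded convolutions, which are
# translation-equivariant by construction (a stand-in for a real G).
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1, padding_mode="circular"),
    nn.LeakyReLU(0.2),
    nn.Conv2d(16, 3, 3, padding=1, padding_mode="circular"),
)
x = torch.randn(1, 3, 64, 64)
print(float(translation_equivariance(net, x)))  # ideally infinite
```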
The following are results for six datasets using StyleGAN2 and the proposed alias-free generators (configs T and R).

6/8
The animation below compares the internal representations of StyleGAN2 and StyleGAN3. StyleGAN3 appears to build the image in a fundamentally different manner: from "multi-scale phase signals that follow the features seen in the final image":

7/8
Useful links:

Paper & results: paperswithcode.com/paper/alias-fr…
Code: github.com/NVlabs/stylega…
Project website: nvlabs.github.io/stylegan3/

8/8
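
If you want to try a pretrained network, the repo distributes generators as pickles that can be called directly from Python. A minimal sketch following the usage pattern shown in the NVlabs/stylegan3 README; the file name and availability of a CUDA GPU are assumptions here:

```python
# Sketch of the README usage pattern; 'stylegan3-r-ffhq-1024x1024.pkl' is
# assumed to have been downloaded already and a CUDA GPU to be available.
import pickle
import torch

with open("stylegan3-r-ffhq-1024x1024.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()    # generator as a torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()       # latent code
c = None                                   # class labels (unused for FFHQ)
img = G(z, c)                              # NCHW float32 image in [-1, 1]
print(img.shape)                           # e.g. [1, 3, 1024, 1024]
```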

More from @paperswithcode

Nov 15, 2022
🪐 Introducing Galactica. A large language model for science.

Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights: galactica.org
We believe models should be open.

To accelerate science, we open-source all models, including the 120-billion-parameter model, with no friction. You can access them here.

github.com/paperswithcode…
We release our initial paper below. We train on a large scientific corpus of papers, reference material, knowledge bases, and many other sources. It includes scientific text as well as scientific modalities such as proteins and compounds.

galactica.org/paper.pdf
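
A quick, hedged way to try one of the smaller checkpoints is via Hugging Face transformers, assuming the weights are mirrored on the Hub under the facebook/galactica-* names (the official galactica.org release is the primary route):

```python
# Hedged sketch: loading a small Galactica checkpoint via Hugging Face
# transformers, assuming a facebook/galactica-125m mirror exists on the Hub.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-125m")

prompt = "The main advantage of the Transformer architecture is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```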
Aug 31, 2022
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of August on @paperswithcode.
1) An Image is Worth One Word - a new approach that allows for more creative freedom with image generation; proposes "textual inversion" to find pseudo-words that can be composed into new sentences to guide personalized creations.

paperswithcode.com/paper/an-image…
2) Cold Diffusion - proposes diffusion models built around arbitrary image transformations without Gaussian noise; discusses the potential for generalized diffusion models that invert arbitrary processes.

paperswithcode.com/paper/cold-dif…
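
The generalized sampling loop from Cold Diffusion can be sketched for any degradation operator D and learned restoration operator R; the blur degradation and the untrained restoration network below are placeholders, not the paper's trained models:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(x0, t):
    """Placeholder degradation D(x0, t): progressively stronger blur
    (Cold Diffusion allows arbitrary, even deterministic, degradations)."""
    if t == 0:
        return x0
    k = 2 * t + 1
    c = x0.shape[1]
    kernel = torch.ones(c, 1, k, k) / (k * k)
    return F.conv2d(x0, kernel, padding=k // 2, groups=c)

restorer = nn.Conv2d(3, 3, 3, padding=1)  # untrained placeholder network

def restore(x_t, t):
    """Placeholder for the learned restoration operator R(x_t, t); the paper
    trains a network to predict the clean image from its degraded version."""
    return restorer(x_t)

def cold_diffusion_sample(x_T, T=10):
    """Improved sampling loop from the paper:
    x_{t-1} = x_t - D(R(x_t, t), t) + D(R(x_t, t), t-1)."""
    x_t = x_T
    for t in range(T, 0, -1):
        x0_hat = restore(x_t, t)                         # estimate clean image
        x_t = x_t - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x_t

# Toy usage: the random input stands in for a fully degraded image.
sample = cold_diffusion_sample(torch.randn(1, 3, 32, 32))
print(sample.shape)
```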
Jul 19, 2022
Keeping up with Language Models

Check out these trending papers to catch up on the latest developments in language models. ↓
1) N-Grammer (Roy et al.) - takes inspiration from statistical language modeling and augments Transformers with latent n-grams; it matches strong baselines like the Transformer and Primer while being faster at inference.

paperswithcode.com/paper/n-gramme…
2) Language Models (Mostly) Know What They Know (Kadavath et al.) - investigates whether an LM can be trained to perform well at predicting which questions it will be able to answer correctly; this enables self-evaluation on open-ended sampling tasks.

paperswithcode.com/paper/language…
Jul 5, 2022
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of June on @paperswithcode. ↓
1️⃣ Mask DINO (Li et al.) - extends DINO (DETR with Improved Denoising Anchor Boxes) with a mask prediction branch to support image segmentation tasks (instance, panoptic, and semantic).

paperswithcode.com/paper/mask-din…
2️⃣ Hopular (Schäfl et al.) - proposes a deep learning architecture based on continuous Hopfield networks, achieving competitive results on small-sized tabular datasets.

paperswithcode.com/paper/hopular-…
May 31, 2022
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of May on @paperswithcode. 1/11
1⃣ OPT (Zhang et al.) - releases open pre-trained transformer language models ranging from 125M to 175B parameters. The release includes a logbook detailing infrastructure challenges and code to experiment with the released models. 2/11

paperswithcode.com/paper/opt-open…
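
A quick way to try the smallest released checkpoint, assuming the Hugging Face transformers mirror (facebook/opt-125m); the larger checkpoints follow the same pattern:

```python
# Hedged sketch: trying the smallest OPT checkpoint via Hugging Face
# transformers; larger facebook/opt-* checkpoints load the same way.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open pre-trained transformers are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```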
2⃣ CoCa (Yu et al.) - a new foundation model that achieves a new state-of-the-art on ImageNet (90.6% top-1); proposes a minimal strategy to jointly pre-train an image-text encoder-decoder with a contrastive loss and a captioning loss. 3/11

paperswithcode.com/paper/coca-con…
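
A minimal sketch of what "contrastive loss plus captioning loss" means in practice, in the spirit of CoCa; the loss weights, temperature, and toy tensors below are illustrative assumptions, not the paper's values:

```python
import torch
import torch.nn.functional as F

def coca_style_loss(img_emb, txt_emb, caption_logits, caption_targets,
                    temperature=0.07, lam_con=1.0, lam_cap=2.0):
    """Combine an image-text contrastive loss with a captioning
    (autoregressive cross-entropy) loss; weights are illustrative."""
    # Contrastive loss over a batch of paired image/text embeddings.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.shape[0])
    con = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
    # Captioning loss: next-token cross-entropy over the text decoder outputs.
    cap = F.cross_entropy(caption_logits.flatten(0, 1), caption_targets.flatten())
    return lam_con * con + lam_cap * cap

# Toy tensors standing in for real encoder/decoder outputs.
B, D, L, V = 4, 256, 12, 1000
loss = coca_style_loss(torch.randn(B, D), torch.randn(B, D),
                       torch.randn(B, L, V), torch.randint(0, V, (B, L)))
print(float(loss))
```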
Apr 25, 2022
10 Recent Trends in Language Models

In this thread, we summarize ten recent trends and insights in language models. ↓
1) Scaling Laws

Kaplan et al. report that language model (LM) performance improves smoothly as model size, dataset size, and compute increase. Recent works provide empirical evidence that LMs are under-explored and can be improved along other axes.

paperswithcode.com/paper/scaling-…
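
The smooth improvement is summarized by power-law fits of test loss against scale. A toy illustration of the model-size law's form; the exponent and constant below are only of the order reported by Kaplan et al. and are used purely for illustration:

```python
# Illustrative power-law fit L(N) = (N_c / N) ** alpha_N in the style of
# Kaplan et al.; constants are of the order reported, not exact fits.
ALPHA_N = 0.076
N_C = 8.8e13  # characteristic parameter count

def loss_vs_params(n_params):
    return (N_C / n_params) ** ALPHA_N

for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> predicted loss {loss_vs_params(n):.2f}")
```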
2) Compute-Optimal Models

Hoffmann et al. find that current large LMs are undertrained and that, for compute-optimal training, model size and the number of training tokens should be scaled equally. Their compute-optimal model, Chinchilla (70B), outperforms Gopher (280B) on several tasks.

paperswithcode.com/paper/training…
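
A back-of-the-envelope version of the compute-optimal rule, using the common approximations C ≈ 6·N·D for training FLOPs and roughly 20 training tokens per parameter at the optimum; both are rules of thumb derived from the paper's fitted scaling laws, not its exact formulas:

```python
# Chinchilla-style sizing sketch: training compute C ~ 6*N*D, with roughly
# 20 tokens per parameter at the compute-optimal point (approximations).
TOKENS_PER_PARAM = 20

def compute_optimal(flop_budget):
    # C = 6 * N * D and D = 20 * N  =>  N = sqrt(C / 120)
    n_params = (flop_budget / (6 * TOKENS_PER_PARAM)) ** 0.5
    n_tokens = TOKENS_PER_PARAM * n_params
    return n_params, n_tokens

# Roughly Chinchilla's budget (~5.8e23 FLOPs): ~70B params, ~1.4T tokens.
n, d = compute_optimal(5.8e23)
print(f"params ≈ {n:.2e}, tokens ≈ {d:.2e}")
```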