#DeepFake Tutorial.

In this thread I'm going to post some tips & tricks for identifying deep fakes, using these examples I found online.

📊 Vote in this other thread first:
There are many ways to detect deep fakes. Here are three of them:
1) Impossible static poses.
2) Impossible movements.
3) Technology artefacts.

The first category seems to be easier to detect, but the second is more reliable. The third may go away soon!
VIDEO #1

This one seems easy enough...

TIP: Use the speed controls in your favorite video player to slow it down while it's playing.
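If you prefer to script it, here's a minimal sketch using OpenCV (assuming the opencv-python package and a local copy of the clip saved as clip.mp4, a hypothetical filename):

import cv2

cap = cv2.VideoCapture("clip.mp4")       # hypothetical local copy of the clip
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 if metadata is missing
delay_ms = int(1000.0 / fps * 4)         # hold each frame 4x longer: quarter speed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("slowed", frame)
    # Swap in cv2.waitKey(0) to step frame-by-frame on any keypress instead.
    if cv2.waitKey(delay_ms) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()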
Here is a slowed-down version so you can catch the movement. Does the smile look impossible or improbable to you?
Did you change your mind?
VIDEO #2

This one is harder. It falls into category 3), technology artefacts. When deep neural networks generate faces, parts of the result can be a bit out of focus, leaving that region blurred...

Look at the mouth and teeth in this slowed-down version. Are they significantly more blurred than the rest of the image? Do they go in and out of focus depending on the position of the mouth?
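If you want a number instead of a gut feeling, a common blur proxy is the variance of the Laplacian; you can compare the mouth region against the whole frame. A rough sketch (the filename and the mouth bounding box are made up; in practice you'd pick the box by eye or with a face-landmark detector):

import cv2

def sharpness(gray):
    # Variance of the Laplacian: lower values mean a blurrier region.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frame = cv2.imread("frame_0042.png")  # hypothetical exported frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mouth = gray[300:360, 250:370]        # hypothetical mouth box, rows then columns
print("mouth:", sharpness(mouth))
print("frame:", sharpness(gray))

If the mouth scores consistently lower than the rest of the frame across many frames, that's exactly the artefact this tip is about.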
Did you change your mind?
VIDEO #3

This one is easier, but I broke the link to "Original Clip." You need the HD versions, as compression on Twitter/YouTube will hide many glitches.

Use a local video player like VLC to slow down or step frame-by-frame:
📺 ipfs.io/ipfs/QmfZFPHBC…
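If stepping in a player is awkward, you can also dump every frame to disk and flip through them in an image viewer. A minimal sketch, again assuming a hypothetical local copy called clip.mp4:

import cv2

cap = cv2.VideoCapture("clip.mp4")  # hypothetical local copy of the clip
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frame_{i:04d}.png", frame)  # numbered stills, easy to compare
    i += 1
cap.release()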

In the slowed-down version, you can better judge whether the movement looks physically correct or not...
Did you change your mind?
VIDEO #4

This one is by far the hardest. Current votes are about tied, so it's not clearly one or the other ;-)

What I'm looking for are signs of "interpolation" (a form of blending, known as "lerping" in animation).
Do the parts of the face slide into position? Do colors blend?
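"Lerping" is nothing more than a weighted average of two images, and that averaging is what leaves the tell-tale smear. A toy NumPy illustration of the blend itself (not how any particular deepfake tool works internally):

import numpy as np

def lerp(a, b, t):
    # Linear interpolation: t=0 returns a, t=1 returns b, t=0.5 an even blend.
    return (1.0 - t) * a + t * b

a = np.zeros((2, 2))        # stand-ins for two grayscale frames
b = np.full((2, 2), 255.0)
print(lerp(a, b, 0.5))      # every pixel lands exactly halfway: a uniform smear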

Is one cheek more blurred than the other? Are there glitches or popping in that area?
Did you change your mind?
Fixed link to #4:
📺 ipfs.io/ipfs/QmVSnUuB4…

VIDEO #5

This one is my favorite. I'd use category 1) or 2) to determine if this is real or not...
Here's a slowed-down version so you can take a closer look. It's the smile, and the blending just before it, that will help you decide.

📺 ipfs.io/ipfs/QmbZUdVYc…
Did you change your mind?
Overall, given the current state of the art of #DeepFakes in politics, there were not as many "technology artefacts" as I was expecting (e.g. seams, borders, popping), and even blurring can be hidden by bad compression.

Sometimes it takes the trained eye of someone in animation!
Reposting this folder with the original clips (incl. slow version) in a different encoding (mp4/x264), in case you had problems viewing them:

πŸ“ ipfs.io/ipfs/Qma8TD3kW…

More from @alexjc

24 Nov
#DeepFake Alert!

I've been digging through various propaganda and conspiracy websites (so you don't have to) and finding a surprisingly large number of deep fake appearances of Mr. Biden.

Here is a thread with videos+polls to test your skills at discerning what's real or not...
VIDEO #1

Original clip at higher quality (mp4/vp9):
📺 ipfs.io/ipfs/Qmb9ekFmw…
Is this real Biden or fake Biden?
23 Nov
One of my favorite recent papers questions a fundamental building block of machine learning: cross-entropy loss. (Surprised it took until 2017 to discover focal loss, and until 2020 to apply it to calibrating DNNs.)

🔗 arxiv.org/abs/2002.09437
📝 Calibrating Deep Neural Networks using Focal Loss
No matter how well you understand something mathematically, you still might be missing something that your current models just don't show you...

Intuitive understanding comes first, the math follows!
The code is #PyTorch (of course :-) and available here:
github.com/torrvision/foc…

If you drop this into your codebase and see improvements, let me know!
16 Jun
Next in our literature survey of Texture Synthesis, a personal favorite and under-rated paper by Li and Wand. 💥

An illustrated review & tutorial in a thread! 👇

πŸ“ Combining Markov Random Fields & Convolutional Neural Networks for Image Synthesis
πŸ”— arxiv.org/abs/1601.04589 #ai
Here's the nomenclature I'm using.

✏️ Beginner-friendly insight or exercise.
πŸ•³οΈ Related work that's relevant here!
πŸ“– Open research topic of general interest.
πŸ’‘ Idea or experiment to explore further...

See this thread for context and other reviews:
πŸ•³οΈ The paper of Li & Wand is inspired by Gatys' work from 2015. It explores a different way (sometimes better) to use deep convolution networks to generate images...
16 May
Let's start our tour of research papers where #generative meets deep learning with this classic by Gatys, Ecker and Bethge from 2015.✨

A multimedia tutorial & review in a thread! 👇

πŸ“ Texture Synthesis Using Convolutional Neural Networks
πŸ”— arxiv.org/abs/1505.07376 #ai
Here's the nomenclature I'll be using.

✏️ Beginner-friendly insight or exercise.
πŸ•³οΈ Related work that's relevant here!
πŸ“– Open research topic of general interest.
πŸ’‘ Insight or idea to experiment further...

See this thread for context and other reviews:
The work by Gatys et al. is an implementation of a parametric texture model: you extract "parameters" (somehow) from an image, and those parameters describe the image, ideally well enough that you can reproduce its texture.
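In Gatys et al., those "parameters" are Gram matrices of convolutional feature maps: channel-by-channel co-occurrence statistics that throw away spatial layout. A minimal sketch of the statistic itself (feature extraction omitted; features is assumed to be a single CxHxW activation tensor):

import torch

def gram_matrix(features):
    # features: (C, H, W) activations from one conv layer.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (h * w)  # (C, C) matrix: the texture "parameters"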

I'll be using these textures (photos) as examples throughout:
4 May
It appears that the experiment known as creative.ai will be ending soon, roughly four years after it began. Since I got permission to share, I thought I'd write more about my part in particular and what I could have done differently.

Thread 1/🧡
2/ I say "experiment" because on some level everything is just an experiment, with lessons to be learned from the outcomes. Experiments never fail!

With hindsight I feel more able to discuss this rationally...
3/ However, when you're in the trenches, everything can feel like a critical moment, with the full weight of potential failure behind it.

π‘π‘œπ‘π‘œπ‘‘π‘¦ π‘ π‘Žπ‘–π‘‘ 𝑖𝑑 π‘€π‘Žπ‘  π‘’π‘Žπ‘ π‘¦, 𝑏𝑒𝑑 π‘›π‘œ π‘œπ‘›π‘’ π‘’π‘£π‘’π‘Ÿ π‘ π‘Žπ‘–π‘‘ 𝑖𝑑 π‘€π‘œπ‘’π‘™π‘‘ 𝑏𝑒 π‘‘β„Žπ‘–π‘  β„Žπ‘Žπ‘Ÿπ‘‘.
17 Jan 19
Impressive! There has been significant progress in GANs over the past few years, but that's not really what we're seeing here... [1/8]

(Thread. Buckle up! 👇🔥)
If you were to include images in a Tweet that promoted progress specifically in GAN research (relativistic discriminators, Wasserstein constraint, unrolled steps, spectral norm, etc.), it'd look like this... Not quite so catchy! 🤓 [2/8]
It's like a different field of research.

The progress in the latest NVIDIA paper was made 100% with domain-specific changes independent of GANs. The authors say so themselves in the introduction: [3/8]
