Here is a slowed-down version so you can catch the movement. Does the smile look impossible or improbable to you?
Did you change your mind?
VIDEO #2
This one is harder. It falls into category 3) of technology artifacts: when deep neural networks generate faces, some parts can come out slightly out-of-focus, leaving that region blurred...
Look at the mouth and teeth in this slowed-down version. Are they significantly more blurred than the rest of the image? Do they go in and out of focus depending on the position of the mouth?
Did you change your mind?
VIDEO #3
This one is easier, but I broke the link to "Original Clip." You need the HD versions as compression on Twitter/YouTube will hide many glitches.
Use a local video player like VLC to slow down or step frame-by-frame:
📺 ipfs.io/ipfs/QmfZFPHBC…
Overall, in the current state of the art of #DeepFakes in politics, there were not as many "technology artifacts" as I was expecting (e.g. seams, borders, popping), and even blurring can be hidden by bad compression.
Sometimes it takes the trained eye of someone in animation!
Reposting this folder with the original clips (incl. slow version) in a different encoding (mp4/x264), in case you had problems viewing them:
I've been digging through various propaganda and conspiracy websites (so you don't have to) and finding a surprisingly large number of deep fake appearances of Mr. Biden.
Here is a thread with videos+polls to test your skills at discerning what's real or not...
One of my favorite recent papers questions a fundamental building block of machine learning: the cross-entropy loss. (Surprised it took until 2017 to discover focal loss, and until 2020 to apply it to DNNs.)
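For context, here's a minimal sketch of focal loss (Lin et al., 2017) next to the cross-entropy it modifies, written in PyTorch; the gamma value and the mean reduction are my choices for illustration, not prescribed by the paper:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: cross-entropy re-weighted by (1 - p_t)**gamma,
    which down-weights easy, well-classified examples.
    With gamma = 0 this reduces to plain cross-entropy."""
    log_pt = -F.cross_entropy(logits, targets, reduction="none")
    pt = log_pt.exp()  # model's probability for the true class
    return ((1.0 - pt) ** gamma * -log_pt).mean()

# Example: logits of shape (batch, classes), integer class targets.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(focal_loss(logits, targets))
```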
Next in our literature survey in Texture Synthesis, a personal favorite and under-rated paper by Li & Wand. 🔥
An illustrated review & tutorial in a thread! 👇
📄 Combining Markov Random Fields & Convolutional Neural Networks for Image Synthesis
🔗 arxiv.org/abs/1601.04589 #ai
Here's the nomenclature I'm using.
✏️ Beginner-friendly insight or exercise.
🗳️ Related work that's relevant here!
🎓 Open research topic of general interest.
💡 Idea or experiment to explore further...
🗳️ The paper by Li & Wand is inspired by Gatys' work from 2015. It explores a different (sometimes better) way to use deep convolutional networks to generate images...
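The core idea, sketched below with my own simplifications: instead of matching global feature statistics, each local patch of the synthesized image's feature maps is matched to its nearest neighbor among the style image's patches (Li & Wand use normalized cross-correlation for the matching, and an MRF-style energy for the loss). A minimal PyTorch sketch, not the authors' code:

```python
import torch
import torch.nn.functional as F

def mrf_loss(synth_feats, style_feats, k=3):
    """Sketch of Li & Wand's MRF energy: match every k x k patch of the
    synthesized feature map to its nearest style patch (by normalized
    cross-correlation), then penalize squared distance to that match."""
    def patches(f):
        # f: (channels, height, width) -> (num_patches, channels*k*k)
        return F.unfold(f.unsqueeze(0), kernel_size=k).squeeze(0).t()

    ps, pt = patches(synth_feats), patches(style_feats)
    # Cosine similarity between every synthesized patch and style patch.
    sim = F.normalize(ps, dim=1) @ F.normalize(pt, dim=1).t()
    nearest = sim.argmax(dim=1)  # index of best-matching style patch
    return ((ps - pt[nearest]) ** 2).sum() / ps.shape[0]
```

In the full method this loss is applied to mid-level VGG feature maps and minimized by gradient descent on the image, usually alongside a content loss.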
The work by Gatys et al. is an implementation of a parametric texture model: you extract "parameters" (somehow) from an image, and those parameters describe the image, ideally well enough that you can reproduce its texture.
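Concretely, the "parameters" in Gatys et al. are Gram matrices of feature maps from a pre-trained CNN (VGG in the paper). A minimal sketch in PyTorch; the layer choice and normalization are implementation details that vary:

```python
import torch

def gram_matrix(features):
    """The texture 'parameters' of Gatys et al.: channel co-activation
    statistics of one layer's feature maps.

    features: (channels, height, width) activations from a CNN layer.
    Returns (channels, channels). Spatial positions are summed away,
    which is why this describes texture rather than image layout.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.t()) / (h * w)
```

Synthesis then means optimizing a noise image so its Gram matrices (across several layers) match those of the example texture.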
I'll be using these textures (photos) as examples throughout:
It appears that the experiment known as creative.ai will be ending soon, roughly four years after it began. Since I got permission to share, I thought I'd write more about my part in particular and what I could have done differently.
Thread <1/🧵>
2/ I say "experiment" because on some level everything is just an experiment, with lessons to be learned from the outcomes. Experiments never fail!
With hindsight I feel more able to discuss this rationally...
3/ However, when you're in the trenches, everything can feel like a critical moment, with the full weight of potential failure behind it.
If you were to include images in a Tweet that promoted progress specifically in GAN research (relativistic discriminators, Wasserstein constraint, unrolled steps, spectral norm, etc.), it'd look like this... Not quite so catchy! 🤔 [2/8]
It's like a different field of research.
The progress in the latest NVIDIA paper was made 100% with domain-specific changes independent of GANs. The authors say so themselves in the introduction: [3/8]