I've been digging through various propaganda and conspiracy websites (so you don't have to) and finding a surprisingly large number of deep fake appearances of Mr. Biden.
Here is a thread with videos+polls to test your skills at discerning what's real or not...
One of my favorite recent papers questions a fundamental building block of machine learning: cross-entropy loss. (Surprising it took until 2017 to discover focal loss, and until 2020 to apply it to DNNs.)
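For intuition, here's a minimal NumPy sketch of the idea (my own toy illustration, not the paper's code): focal loss multiplies cross entropy by a modulating factor (1 − p)^γ, so confident correct predictions contribute almost nothing and training focuses on the hard examples.

```python
import numpy as np

def cross_entropy(p):
    """Cross entropy, given probability p assigned to the true class."""
    return -np.log(p)

def focal_loss(p, gamma=2.0):
    """Focal loss (Lin et al., 2017): cross entropy scaled by (1 - p)^gamma,
    which down-weights easy, well-classified examples."""
    return -((1.0 - p) ** gamma) * np.log(p)

easy, hard = 0.9, 0.1
# The easy example's loss shrinks ~100x; the hard one's barely changes.
print(cross_entropy(easy), focal_loss(easy))  # ~0.105 vs ~0.00105
print(cross_entropy(hard), focal_loss(hard))  # ~2.303 vs ~1.865
```

With γ = 0 focal loss reduces exactly to cross entropy, which is why it slots into existing training pipelines so easily.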
Next in our literature survey in Texture Synthesis, a personal favorite and underrated paper by Li and Wand. 💥
An illustrated review & tutorial in a thread! 👇
📝 Combining Markov Random Fields & Convolutional Neural Networks for Image Synthesis
🔗 arxiv.org/abs/1601.04589 #ai
Here's the nomenclature I'm using.
✏️ Beginner-friendly insight or exercise.
🕳️ Related work that's relevant here!
📖 Open research topic of general interest.
💡 Idea or experiment to explore further...
🕳️ The paper by Li & Wand is inspired by Gatys' work from 2015. It explores a different (sometimes better) way to use deep convolutional networks to generate images...
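Roughly speaking, where Gatys matches global statistics, Li & Wand match local neural patches: each patch of the synthesized image's feature map is pulled toward its nearest neighbor among the style image's patches. A toy NumPy sketch of that matching step (shapes and helper names are my own for illustration; the real method runs on VGG feature maps inside a gradient-based synthesis loop):

```python
import numpy as np

def extract_patches(fmap, k=3):
    """All k x k patches of a (C, H, W) feature map, flattened to vectors."""
    c, h, w = fmap.shape
    patches = [fmap[:, i:i + k, j:j + k].ravel()
               for i in range(h - k + 1)
               for j in range(w - k + 1)]
    return np.stack(patches)

def best_matches(content, style, k=3):
    """For each content patch, the index of the most similar style patch
    by normalized cross-correlation -- the MRF matching step."""
    cp = extract_patches(content, k)
    sp = extract_patches(style, k)
    cp = cp / np.linalg.norm(cp, axis=1, keepdims=True)
    sp = sp / np.linalg.norm(sp, axis=1, keepdims=True)
    return (cp @ sp.T).argmax(axis=1)

# Random maps standing in for CNN activations of two images.
content = np.random.rand(4, 8, 8)
style = np.random.rand(4, 8, 8)
idx = best_matches(content, style)  # one style patch index per content patch
```

The matched patches then define the MRF loss that the synthesized image is optimized against.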
The work by Gatys et al. is an implementation of a parametric texture model: you extract "parameters" (somehow) from an image, and those parameters describe the image — ideally such that you can reproduce its texture.
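In Gatys' model those "parameters" are Gram matrices of CNN feature maps: channel co-activation statistics that describe texture while discarding spatial layout. A minimal sketch (the toy feature map is my stand-in for one VGG layer's activations; the real model uses several layers of a pretrained VGG):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN feature map, as in Gatys et al. (2015).
    features: array of shape (channels, height, width).
    Returns a (channels x channels) matrix of channel co-activations --
    the texture 'parameters'; spatial arrangement is averaged away."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

# Toy stand-in for one layer's activations.
feats = np.random.rand(8, 16, 16)
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

Synthesis then means optimizing a new image until its Gram matrices match those of the target texture.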
I'll be using these textures (photos) as examples throughout:
It appears that the experiment known as creative.ai will be ending soon, roughly four years after it began. Since I got permission to share, I thought I'd write more about my part in particular and what I could have done differently.
Thread 1/🧵
2/ I say "experiment" because on some level everything is just an experiment, with lessons to be learned from the outcomes. Experiments never fail!
With hindsight I feel more able to discuss this rationally...
3/ However, when you're in the trenches, everything can feel like a critical moment — with the full weight of potential failure behind it.
If you were to include images in a Tweet promoting progress specifically in GAN research (relativistic discriminators, Wasserstein constraints, unrolled steps, spectral norm, etc.), it'd look like this... Not quite so catchy! 🤓 [2/8]
It's like a different field of research.
The progress in the latest NVIDIA paper was made 100% with domain-specific changes independent of GANs. The authors say so themselves in the introduction: [3/8]