The researchers have introduced 3 key elements that allow the system to achieve state-of-the-art visual quality while maintaining competitive training times, and, importantly, enable high-quality real-time novel-view synthesis at 1080p resolution 🤯
They tested the algorithm on 13 real scenes taken from previously published datasets, as well as on the synthetic Blender dataset.
In the paper, they display visual comparisons to demonstrate the significant leap in quality.
The results they've achieved are almost identical in resolution and quality to the real environment. Truly impressive!
In short, massive news! This represents a breakthrough in the realm of Neural Radiance Fields. Can't wait for this to be fully integrated into our mobile phones! I WANT TO PLAY WITH THIS TECH.
If you liked this and would like me to continue writing similar threads, an RT on the first tweet of the thread will encourage me to keep doing so. Thanks! 🙏🙏
Many people don't realize they can retrieve the SEED NUMBER for each image generated on Midjourney, and everything you can do with it. Trust me, it's a game-changer for diving into AI explorations.
Let's go! 🧵👇🏼
#midjourney
🚨 Beware! This is an ADVANCED Midjourney tutorial, only for experienced argonauts. I don't know of any other MJ-focused creator who has covered this topic in as much depth as this tutorial does. If you find one, send me the link, I'd love to learn even more!
First things first: what are seed numbers?
Diffusion models (like MJ) use a random seed number for each generation to create a field of visual noise (like TV static) as a starting point to generate the initial image grids.
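To make that concrete, here's a minimal sketch in PyTorch of how a fixed seed turns into a reproducible noise field. Midjourney's internals aren't public, so the latent shape and setup here are assumptions that illustrate the general diffusion-model principle, not MJ's actual pipeline:

```python
import torch

def initial_latents(seed: int, shape=(1, 4, 64, 64)) -> torch.Tensor:
    """Create the starting noise for a diffusion run from a fixed seed.

    Same seed -> same "TV static" -> same image (given identical prompt
    and settings). The shape is a typical Stable-Diffusion-style latent
    size, used here purely for illustration.
    """
    generator = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=generator)

# Two runs with the same seed produce identical starting noise...
assert torch.equal(initial_latents(42), initial_latents(42))
# ...while a different seed gives a completely different noise field.
assert not torch.equal(initial_latents(42), initial_latents(43))
```

This is exactly why knowing the seed matters: it pins down the starting static, so you can re-run or tweak a generation instead of getting a brand-new random result every time.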
🔴 PERFUSION: a generative AI model from NVIDIA that fits on a floppy disk 💾
It takes up just 100KB. Yes, you read that right: far less than any picture you take with your mobile phone! Why is this revolutionary, and why could it change everything?
I'll tell you 🧵👇
Perfusion is a really lightweight "text-to-image" model (100KB) that also trains in just 4 minutes.
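The trick behind the tiny footprint is that Perfusion doesn't save a whole new model: per NVIDIA's paper ("Key-Locked Rank One Editing for Text-to-Image Personalization"), it stores rank-one edits to cross-attention weights. Here's a back-of-the-envelope sketch of why that's so small; the layer sizes are assumptions (roughly Stable Diffusion's 768-dim text features projected to a 320-dim attention layer), not the exact Perfusion architecture:

```python
# Back-of-the-envelope: why a rank-1 edit is tiny.
# Dimensions below are ASSUMED for illustration only.
d_text, d_model = 768, 320

full_update  = d_text * d_model   # replacing the whole projection matrix
rank1_update = d_text + d_model   # storing an outer product u @ v.T instead

print(f"full matrix : {full_update  * 4 / 1024:7.1f} KB")  # ~960 KB in fp32
print(f"rank-1 edit : {rank1_update * 4 / 1024:7.1f} KB")  # ~4.3 KB in fp32
```

A handful of edits like this across the model's cross-attention layers lands in the ~100KB ballpark, which is how a personalized concept fits on a floppy disk.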