I also used #img2img to instantly generate dozens of variants, "inspired" by a single original photograph.
This provides consistent assets (similar shape, size, or materials) with some slight variations. It's up to the artist/user to select which one looks best.
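For anyone curious, here's a rough sketch of what that img2img variant loop can look like with the Hugging Face diffusers library. The checkpoint, strength value, and file names are placeholders of mine, not the exact settings used in this thread.

```python
# Sketch: generate a batch of img2img variants from a single photo.
# Checkpoint, strength, and paths are assumptions for illustration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The single original photograph used as the init image.
init_image = Image.open("wooden_chest_photo.jpg").convert("RGB").resize((512, 512))

# Lower strength keeps each variant closer to the original photo;
# each call uses a fresh random seed, so the outputs differ slightly.
for i in range(12):
    variant = pipe(
        prompt="wooden chest, game asset, detailed, 3D rendering",
        image=init_image,
        strength=0.45,
        guidance_scale=7.5,
    ).images[0]
    variant.save(f"chest_variant_{i:02d}.png")
```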
Same thing here, using #img2img - however, I prompted "steel chest" instead of "wooden chest"
While it's not 100% perfect, there's still more steel in these chests than in the previous ones.
Also, some of the assets are disjoint or show anomalies. Some fine-tuning is necessary.
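The material swap itself is just a prompt change on the same init image. Rough, self-contained sketch below, assuming the same diffusers img2img setup as above; the prompts and strength value are illustrative guesses.

```python
# Sketch: same init photo, different material in the prompt.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
chest_photo = Image.open("wooden_chest_photo.jpg").convert("RGB").resize((512, 512))

for material in ("wooden", "steel"):
    out = pipe(
        prompt=f"{material} chest, game asset, detailed, 3D rendering",
        image=chest_photo,
        strength=0.55,  # a bit higher so the new material can take over
        guidance_scale=7.5,
    ).images[0]
    out.save(f"{material}_chest.png")
```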
Sometimes, you get surprised! I used this green icon to run some #img2img prompts and... I didn't expect to see these results 😂😅
As soon as the model was trained, the first step was randomly generating a large set of images w. a simple prompt ("golem, detailed, realistic, 3D rendering")
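For reference, a minimal batch-generation sketch with diffusers. The checkpoint path is a hypothetical placeholder for the fine-tuned golem model, and the step count and guidance are assumed defaults.

```python
# Sketch: randomly generate a large set of golems with a simple prompt.
# The model path is a hypothetical placeholder for a fine-tuned checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/finetuned-golem-model",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "golem, detailed, realistic, 3D rendering"
for i in range(100):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"golem_{i:03d}.png")
```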
Each golem can be extracted by removing the images' backgrounds (reco: @photoroom_app).
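PhotoRoom is an app, but if you'd rather script this step, here's a rough sketch using the open-source rembg library instead (not what PhotoRoom uses; folder names are just illustrative).

```python
# Sketch: strip the background from every generated golem image.
# Uses the open-source rembg library as a scripted alternative to PhotoRoom.
from pathlib import Path
from PIL import Image
from rembg import remove

Path("cutouts").mkdir(exist_ok=True)
for path in Path("golems").glob("golem_*.png"):
    img = Image.open(path)
    cutout = remove(img)  # returns an RGBA image with the background removed
    cutout.save(Path("cutouts") / f"{path.stem}_cutout.png")
```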
Some of the designs are amazing. However, the golems all look very similar to each other. Let's separate them into categories.
@Tocelot DALL·E, Midjourney, or Stable Diffusion generate exquisite images. However, it’s challenging to reconstruct specific subjects while maintaining high fidelity to their key visual features. Even w dozens of detailed iterations, the outcome is rarely consistent. Prompting is daunting…
I've listened to hundreds of podcasts and interviews about the Metaverse, but this one honestly stands out as one of the best so far. In just 30 min, @AdamDraper explains why NFTs matter and why they will be a key element of the #Metaverse
And because I really enjoyed it, I extracted some key quotes from the transcript. But you should listen to the video too! And learn about NFTs, the Metaverse, DAOs, identity in the metaverse, and why you want to be part of new technologies that seem like “jokes” at first.
“Scarcity on the internet is gonna be a big deal. People are saying that we're in an NFT bubble. I think people are negative with hope, they hope that it pops so that they can buy in”.
So last weekend, I went down into the Catacombs of Paris. If you're not familiar, it's an insane anthill-like network of 200 mi. of galleries and chambers, centuries-old, 60 ft beneath the surface. @NewYorker wrote about it in 2019: newyorker.com/news/dispatch/…. Here's a short story.
1/12
The goal was to 3D scan as many places as possible, using only the iPhone 12 Pro/LiDAR (and two powerful LED lights). And so we spent 12 hrs underground, walking (or crawling) more than 20 miles, and scanning 30 different places using @Scaniverse, @PolycamAI, and @Sitescape
2/12
All scans were uploaded to @Sketchfab and the whole series is available here: sketchfab.com/edemaistre/col…. Interestingly, I was able to process all the scans immediately on the device, down below (without service). Cool way to demonstrate the power of LiDAR scanning w the iPhone.