I actually generated more than 200 buildings (and even some vehicles), from which I picked a smaller dataset, keeping enough variability within a certain consistent style.
I'll keep the remaining data for new training runs in the coming days :)
Once the model was trained, I tried a few simple prompts to evaluate which type of isometric buildings it could generate.
Such as... a nuclear plant (not really in the original training dataset)
Or a factory
A soviet bunker.
A radar dome.
A garage...
A refinery.
And finally... the bunker. An easy example, in the style of a "pillbox" in Command & Conquer.
This board was generated using #img2img, from one initial image (to the right, from the original training dataset).
I tried other "shapes" for a bunker, such as the ones below (using the "factory" image as the img2img input)
It worked too, but maybe not as well as the one above.
So I re-trained the model with two changes: reducing the dataset to 12 images (to increase consistency, at the risk of lowering variability), and setting the text encoder training at 50% (vs. 100%).
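As an aside, one common way fine-tuning tools implement a "text encoder at 50%" setting is to update the text encoder only for the first half of the training steps, then freeze it. A minimal sketch of that interpretation (the helper name and the 50%-of-steps reading are my assumptions, not Scenario's documented behavior):

```python
# Hypothetical sketch: stop updating the text encoder after a fraction
# of the total training steps. This is one plausible reading of a
# "text encoder at 50%" setting, not a documented implementation.

def train_text_encoder_at(step: int, total_steps: int, fraction: float = 0.5) -> bool:
    """Return True while the text encoder should still receive updates."""
    return step < int(total_steps * fraction)

# With 1000 total steps and fraction=0.5, the text encoder is trained
# for steps 0..499 and frozen afterwards.
schedule = [train_text_encoder_at(s, 1000) for s in (0, 499, 500, 999)]
```

Freezing the text encoder partway through tends to trade prompt flexibility for visual consistency, which matches the goal of the smaller 12-image dataset.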
And it worked much better. Here's a first bunker (tower-shaped)
Back to the "pillbox" shape.
The pillbox looked good, so I customized it as if it was a soviet bunker.
"isometric bunker, realistic, soviet flag, red, video game".
Boom done.
Of course, there has to be the "allied" counterpart.
"isometric bunker, realistic, USA flag, blue, allied, video game"
I changed the original image to generate a bunker with a wider angle (and some structures around it).
"isometric bunker, realistic, video game".
That's it. 3 words, a good fine-tune, a curated image (for img2img) and the possibilities are just infinite.
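For context, the "3 words + curated image" recipe maps onto a handful of img2img parameters. A minimal sketch (parameter names follow the common `diffusers` convention; the exact values are illustrative assumptions, not the settings used in this thread):

```python
# Illustrative img2img settings (values are assumptions). In most
# Stable Diffusion img2img tools, "strength" controls how far the
# output may drift from the input image: low values stay close to the
# curated reference, high values let the prompt dominate.
params = {
    "prompt": "isometric bunker, realistic, video game",
    "strength": 0.55,        # 0.0 = return the input image, 1.0 = ignore it
    "guidance_scale": 7.5,   # how strongly the prompt steers generation
    "num_inference_steps": 30,
}

# With a fine-tuned model loaded, these would be passed to an img2img
# pipeline along with the curated reference image.
```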
Close-up views
"Dataset engineering" >>> "prompt engineering"
Another img2img, another shape, another style... but the same "Command and Conquer-like" universe.
And another style of bunker or "command tower".
For this img2img batch, I used an ATC tower as an input (right)
The #AI transformed the control tower into a vertical bunker (and kept some of the original visual features)
I even explored other buildings, such as a #lighthouse, always in the same style (generated from the fine-tune)
This is just infinitely powerful. ESPECIALLY for artists with all the creativity, knowledge, and culture of gaming art.
Once you master different features (training finetunes, prompts, img2img, inpainting...), then the possibilities are just endless.
I predict game #studios will end up managing hundreds (if not thousands) of finetuned models, which will undergo some validation process before being used in production by various teams (artists, developers, designers, marketers...)
And if you still doubt it, this is a quick example of the SAME concept and methodology, but this time applied to the "Fallout" video game (post-apocalyptic #RPG)
A radar dome.
A radar dish/antenna.
A decaying building, etc etc.
If you like this concept, RT this thread, give us a follow (@Scenario_gg), or get on the waitlist (scenario.gg) 🚀
We'll start rolling out in 10-15 days.
And let us know what you'd like to see next!
🤝
• • •
From multiple consistent objects within a single image to fully recreated 3D objects in Blender.
100% AI-generated.
Workflow detailed below 👇
Step 1/
Generate a grid of 6 consistent objects. For this, I used @Scenario_gg with the "Juicy Icons" model, which consistently creates cartoon-style, simplified 3D icons arranged in a grid.
Absolutely loving that this is happening during GDC week 😅. My schedule's packed with meetings & meetups, so not much time to vibe code, but I spun up a basic demo for a platformer jumping game in minutes.
This was fully prompted via Claude 3.7 (on the left), zero manual tweaks. Link below 👇 and I'll keep sharing improvements and tips!
2025 is going to be a wild year for AI-powered game devs.
I used @JustinPBarnett's MCP on GitHub - check it out here.
So far, it feels even easier than Blender, and I can't wait to add more actions, assets, textures, and gameplay!
github.com/justinpbarnett…
My main tip so far is that, just like with Blender MCP, you should proceed step by step >> one edit or element at a time.
Otherwise, Claude will go crazy and will try to do everything at once (and fail).
Here are the key steps to creating stunning turnarounds, using #Scenario
1/ Train or pick a character model (A).
2/ Optionally, pick a style model (B). Use it to create training images for (A), or merge both (A + B = C), for example.
3/ Use the custom model (A or C) to generate consistent characters, then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using Sketching and image2image.
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).
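The upscale in step 6 is done with an AI upscaler inside Scenario; as a stand-in that only shows the target dimensions, here is a plain resize (treating "8K" as 7680 px on the long edge, which is an assumption about the exact output size):

```python
from PIL import Image

def upscale_to_long_edge(image: Image.Image, long_edge: int = 7680) -> Image.Image:
    """Resize so the longer side equals long_edge, preserving aspect ratio.
    Real AI upscalers synthesize detail; Lanczos only interpolates, so
    this is a placeholder for the dimensions, not the quality."""
    w, h = image.size
    scale = long_edge / max(w, h)
    return image.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

# A 1024x768 render scaled to an 8K long edge becomes 7680x5760.
result = upscale_to_long_edge(Image.new("RGB", (1024, 768)))
```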