The model was trained on just 11 pictures (!), with only 1,500 training steps, which turned out to be quick (20 min).
As before, the first step is to "explore" the model with a few generic prompts. The goal is to find the modifiers that will keep a consistent style going forward.
Once the "stable modifiers" are found, it's time to select some of the best output and remove the background when needed.
"A dwarf, detailed, trending on Artstation, Clash of Clans"👇
You can add accessories or pose, for example: "A dwarf lord sitting on a throne". You get images with the throne... and the crown.
This is a group of dwarves bearing an axe 🪓 (supposedly, at least: none of the pictures in the training dataset had an axe, so I'm not getting the best results here)
Here's another group, this time bearing "a sword and a shield". A little bit better.
Here's a group of baby dwarves, on a baby chair
You can play with expressions too. Include "laughing" in the prompt, and you'll get characters like the ones below (male or female, btw)
Of course, I had to try my zombie trick: "a zombie dwarf, trending on Artstation, clash of clans".
Zombify everything.
I made some elves... in the style of the dwarves
Let's not forget img2img. This was a test to reproduce Russell Crowe's famous moment in Gladiator... "Are you not entertained?"
You can change the colors of the clothes simply by adjusting the prompt: green, blue, red... or bare chest
And then there's always the "fun and creative" stuff like having your characters play cricket 🏏, basketball 🏀, curling 🥌 or bobsled 🛷
This is not just about single characters, they can be generated in groups ("Dwarves in a gold mine, detailed, trending on Artstation")
And I could keep going on and on, but some other datasets are waiting :)
While images might not be final, this can accelerate design or prototyping processes. I also see RPG communities using this to build entire worlds "in the style of the game", without erratic prompting.
If you like this thread, please feel free to like/RT or share any thoughts below.
Follow for more explorations on how to use #StableDiffusion and #Dreambooth to accelerate your game creation process 🚀
From multiple consistent objects within a single image to fully recreated 3D objects in Blender.
100% AI-generated.
Workflow detailed below 👇
Step 1/
Generate a grid of 6 consistent objects. For this, I used @Scenario_gg with the "Juicy Icons" model, which consistently creates cartoon-style, simplified 3D icons arranged in a grid.
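Before the objects can be used individually (e.g., for the Blender step), the grid has to be split into separate images. The snippet below is a hypothetical helper, not part of Scenario's tooling, and the 3×2 layout is an assumption; it uses Pillow to crop each tile.

```python
from PIL import Image

def slice_grid(img: Image.Image, cols: int = 3, rows: int = 2) -> list[Image.Image]:
    """Split a grid image into tiles, left-to-right, top-to-bottom."""
    w, h = img.size
    tile_w, tile_h = w // cols, h // rows
    return [
        img.crop((c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h))
        for r in range(rows)
        for c in range(cols)
    ]

# Demo with a blank placeholder (stand-in for a generated 768x512 grid)
grid = Image.new("RGB", (768, 512))
tiles = slice_grid(grid)
print(len(tiles), tiles[0].size)  # 6 tiles, each 256x256
```

Each tile can then be background-removed and fed into the next step of the pipeline on its own.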
Absolutely loving that this is happening during GDC week 😅. My schedule's packed with meetings & meetups, so not much time to vibe code, but I spun up a basic demo for a platformer jumping game, in minutes.
This was fully prompted via Claude 3.7 (on the left), zero manual tweaks. Link below 👇 and I'll keep sharing improvements and tips!
2025 is going to be a wild year for AI-powered game devs.
I used @JustinPBarnett's MCP on GitHub - check it out here
So far, it feels even easier than Blender, and I can't wait to add more actions, assets, textures, and gameplay! github.com/justinpbarnett…
My main tip so far is that, just like with Blender MCP, you should proceed step by step >> one edit or element at a time.
Otherwise, Claude will go crazy and try to do everything at once (and fail).
Here are the key steps to creating stunning turnarounds, using #Scenario
1/ Train or pick a character model (A).
2/ Optionally, pick a style model (B). Use it to create training images for (A), or merge both (A + B = C), for example.
3/ Use the custom model (A or C) to generate consistent characters. Then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using Sketching and image2image.
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).