At first, I generated a set of 38 "random" spellbooks, which I used to fine-tune a Stable Diffusion model (via Dreambooth). It took 1 hour.
I then explored a set of possibilities with simple prompts and/or a limited set of modifiers (intricate, detailed, beautiful, 3D render...)
One word can make a difference!
In the example below, I added "realistic" (on the right). The cover becomes less intricate, simpler, and primarily leather.
Find and save your favorite modifiers (massive databases are being shared, by the way).
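If you script your generations locally (e.g. with a self-hosted Stable Diffusion setup), saving favorite modifiers is just string templating. A minimal sketch — the names and modifier sets here are hypothetical, not part of any specific tool:

```python
# Keep a small library of favorite modifier sets and append one to a
# base subject when building a prompt. (Illustrative names only.)
FAVORITE_MODIFIERS = {
    "painterly": ["intricate", "detailed", "beautiful"],
    "render": ["3D render", "hdr", "realistic"],
}

def build_prompt(subject: str, style: str) -> str:
    """Compose a prompt: subject first, then the saved modifiers."""
    return ", ".join([subject] + FAVORITE_MODIFIERS[style])

print(build_prompt("spellbook", "render"))
# spellbook, 3D render, hdr, realistic
```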
If you're #modding a game and need to fit into the initial art, try typing the game's name. In the example below, I added "illustration dungeons and dragons"
The style usually changes accordingly ("simpler", in that case)
You can even test a combination of two or more games :)
When I like a specific item, I don't hesitate to generate a bunch of close variants (16 variants in 30 secs).
Some variants (left) can look better than the original item (right)
Now, instead of random variants, you can easily change the main color by adding it to the prompt
Such as "purple" or "green" below
Or "blue and orange", "pink and black", etc...
Aside from colors, I often try adding details and ornaments to the item(s).
These are spellbooks with "a volcano", "an island", "a skull", and "an eye" on the cover. All done in seconds
Think about the time needed to draw each concept manually...
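Scripted, the color and ornament swaps above amount to filling a prompt template. A minimal sketch, with a hypothetical template (not the exact prompts used here):

```python
# Build a batch of prompt variants by crossing colors with cover
# ornaments. (Illustrative template; adapt to your own base prompt.)
BASE = "spellbook, {color}, {ornament} on the cover, intricate, game icon"

colors = ["purple", "green", "blue and orange", "pink and black"]
ornaments = ["a volcano", "an island", "a skull", "an eye"]

prompts = [
    BASE.format(color=c, ornament=o)
    for c in colors
    for o in ornaments
]
print(len(prompts))  # 4 colors x 4 ornaments = 16 variants
```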
One direction I took when exploring this collection was to specifically play on the materials.
For example, "grimoire, precious metals and gemstone, stylized, natural, game icon, digital illustration, hdr"
One of these "precious metals and gemstone" variants was cool, so I generated 64 variants
... but I could do 400 variants too!
This tool is fantastic for games that need a lot of unique content (such as web3 games, where users value unique/rare assets)
If gemstones aren't the right fit, it's easy to change the materials by changing the prompt.
Let's do "grimoire, gold, silver, bronze, wood, stylized, natural, game icon, digital illustration"
The gemstones are gone, replaced by gold ornaments.
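Same pattern for materials: only the material list changes between the two prompts above. A quick sketch, assuming a simple template:

```python
# Swap the material list inside an otherwise identical prompt to steer
# the ornaments (gemstones vs. metals/wood).
TEMPLATE = "grimoire, {materials}, stylized, natural, game icon, digital illustration"

gem_prompt = TEMPLATE.format(materials="precious metals and gemstone")
metal_prompt = TEMPLATE.format(materials="gold, silver, bronze, wood")

print(gem_prompt)
print(metal_prompt)
```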
Another way to test variations is to leverage #img2img with random images or sketches on the web (or self-drawn). Below are examples of the SAME prompt (precious metals and gemstone) but different #img2img input.
And (as I've had quite a few questions over the past few weeks) >> we're going to make this available to anyone who wants to build great content for games (game artists, game developers, studios, indie devs, art directors, etc.)
100% online, no technical skills required
Stay tuned
PS: if you like it or want to support us, feel free to RT, we would really appreciate it!
And let us know what we should be sharing next :)
From multiple consistent objects within a single image to fully recreated 3D objects in Blender.
100% AI-generated.
Workflow detailed below 👇
Step 1/
Generate a grid of 6 consistent objects. For this, I used @Scenario_gg with the "Juicy Icons" model, which consistently creates cartoon-style, simplified 3D icons arranged in a grid.
Absolutely loving that this is happening during GDC week 😅. My schedule's packed with meetings & meetups, so not much time to vibe code, but I spun up a basic demo for a platformer jumping game, in minutes.
This was fully prompted via Claude 3.7 (on the left), zero manual tweaks. Link below 👇 and I'll keep sharing improvements and tips!
2025 is going to be a wild year for AI-powered game devs.
I used @JustinPBarnett's MCP on GitHub - check it out here
So far, it feels even easier than Blender, and I can't wait to add more actions, assets, textures, and gameplay!
github.com/justinpbarnett…
My main tip so far is that, just like with Blender MCP, you should proceed step by step >> one edit or element at a time.
Otherwise, Claude will go crazy and will try doing everything at once (and fail).
Here are the key steps to creating a stunning character turnaround using #Scenario
1/ Train or pick a character model (A).
2/ Optionally, pick a style model (B). Use it to create training images for (A), or merge both (A + B = C), for example.
3/ Utilize the custom model (A or C) to generate consistent characters. Then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using sketching and img2img.
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).