Daniel and I worked on a specific dataset of stylized characters (a style exploration, made a few years ago).
I focused on the characters below (removed the weapons, beast, and badges) and trained a finetune using @Scenario_gg on 7 images.
It started really simple, and I kept a basic prompt for most of the exploration.
"A character". Just this. And I ran a dozen batches.
Most of the outputs were OK (80%-ish); I simply removed the weird ones.
And here you go. 81 characters, AI-generated from @DanielPlaychain's art.
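(Scenario handles the training and generation in its web app. For anyone curious what this loop looks like in code, here's a minimal local sketch using Hugging Face diffusers; the base model and LoRA filename are placeholders, not the actual Scenario finetune.)

```python
# Minimal local approximation of the loop above, using Hugging Face diffusers.
# The base model and LoRA file are placeholders -- the real finetune lives in
# Scenario and isn't reproduced here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./stylized-characters-lora.safetensors")  # hypothetical finetune

prompt = "A character"  # the whole exploration ran on this basic prompt

# A dozen batches; curation (dropping the weird ~20%) happens afterwards.
for batch in range(12):
    for i, image in enumerate(pipe(prompt, num_images_per_prompt=4).images):
        image.save(f"character_{batch:02d}_{i}.png")
```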
Quick comparison of the closest original asset and a randomly picked AI-generated one:
Some other close-up views (AI-generated too)
Then I tried making other variations by "forcing" the AI to draw a female character (elf-like):
"Character, elf, female"
Or an orc ("Character, orc")
Or a wizard 🧙
I generated variations around the "gremlin/little creature" using img2img
Made more orc-like creatures, also with img2img.
And even tried getting other shapes/silhouettes, still using img2img.
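(If you want to try the same img2img trick outside Scenario, here's a hedged diffusers sketch; the input file, LoRA path, and strength value are all assumptions.)

```python
# Sketch of the img2img variation step with diffusers. "strength" controls how
# far the output can drift from the source image: low values keep the original
# silhouette, higher values allow bigger changes. Paths are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./stylized-characters-lora.safetensors")  # same hypothetical LoRA

init = Image.open("gremlin.png").convert("RGB")  # a previously generated creature

images = pipe(
    prompt="Character, orc",
    image=init,
    strength=0.55,           # keep the silhouette, change the details
    num_images_per_prompt=4,
).images
for i, img in enumerate(images):
    img.save(f"variation_{i}.png")
```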
This was a very quick, yet quite exciting, experiment.
In that particular case, the output might not be final art (i.e. production-ready). Yet, it's a simple demonstration of how artists can use their own work to explore more creative options for their customers.
• • •
Absolutely loving that this is happening during GDC week 😅. My schedule's packed with meetings & meetups, so there's not much time to vibe code, but I spun up a basic demo for a platformer jumping game in minutes.
This was fully prompted via Claude 3.7 (on the left), zero manual tweaks. Link below 👇 and I'll keep sharing improvements and tips!
2025 is going to be a wild year for AI-powered game devs.
I used @JustinPBarnett's MCP on GitHub; check it out here: github.com/justinpbarnett…
So far, it feels even easier than Blender, and I can't wait to add more actions, assets, textures, and gameplay!
My main tip so far is that, just like with Blender MCP, you should proceed step by step >> one edit or element at a time.
Otherwise, Claude will go crazy and will try doing everything at once (and fail).
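(To show why that tip works, here's a toy MCP server written with the official Python SDK, the `mcp` package. This is NOT @JustinPBarnett's Unity server; the tool names and the in-memory "scene" are invented. The point is that each tool is one small, atomic action Claude can call, which is exactly why one-edit-at-a-time prompting maps onto it so well.)

```python
# Toy MCP server sketch (official "mcp" Python SDK). Not the actual Unity MCP;
# the tools and in-memory scene are invented to show the shape of such servers.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-platformer")
scene: list[dict] = []  # stand-in for a real engine's scene graph

@mcp.tool()
def add_platform(x: float, y: float, width: float) -> str:
    """Add a single platform to the scene."""
    scene.append({"type": "platform", "x": x, "y": y, "width": width})
    return f"Platform added ({len(scene)} objects in scene)."

@mcp.tool()
def add_player(x: float, y: float) -> str:
    """Spawn the player character at the given position."""
    scene.append({"type": "player", "x": x, "y": y})
    return "Player spawned."

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for Claude to call, one at a time
```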
Here are the key steps to creating stunning turnarounds, using #Scenario:
1/ Train or pick a character model (A).
2/ Optionally, pick a style model (B). Use it to create training images for (A), or merge both (A + B = C), as sketched in code after this list.
3/ Utilize the custom model (A or C) to generate consistent characters. Then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using Sketching and image2image.
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).
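(For readers who prefer code, the merge in step 2 can be approximated outside Scenario with diffusers' multi-adapter LoRA support. Scenario composes models server-side, so the file names and blend weights below are placeholders.)

```python
# Hedged sketch of step 2's "A + B = C" merge using diffusers' named adapters.
# Scenario composes models server-side; paths and weights are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# (A) the character LoRA and (B) the style LoRA, loaded as named adapters.
pipe.load_lora_weights("./character-lora.safetensors", adapter_name="character")
pipe.load_lora_weights("./style-lora.safetensors", adapter_name="style")

# (C) the blend: weight character identity above the style.
pipe.set_adapters(["character", "style"], adapter_weights=[0.9, 0.6])

image = pipe("character turnaround, front view, full body").images[0]
image.save("turnaround_front.png")
```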
@ClaireSilver12 (I hope you don't mind me retweeting this for broader reach, to share it with more users.)
Here's an advanced use case for the IP Adapter. You can adjust or remove the steps depending on the desired output/goal. Bear with me; it's actually quite straightforward.
1 - Train a LoRA on a specific subject (e.g., character).
2 - Blend the LoRA to perfectly capture the style (e.g., comic, cartoon, oil painting, 3D...).
3 - Run inference on that "blended" model.
4 - Select an image that stands out and use it as a reference with the IP Adapter.
5 - Modify the prompt to create variations of the subject (see the sketch below).
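(Before the detailed walkthrough, here's a rough diffusers approximation of steps 3-5. Scenario has the IP Adapter built in; the LoRA path, reference image, scale, and prompts below are assumptions. The adapter weights are the public h94/IP-Adapter checkpoint.)

```python
# Rough approximation of steps 3-5 with diffusers: run the blended model, then
# reuse a standout output as an IP Adapter reference while varying the prompt.
# The LoRA path, reference image, scale, and prompts are assumptions.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./blended-subject-style-lora.safetensors")  # steps 1-2

# Step 4: attach the (public) IP Adapter weights and a reference image.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference steers the output
reference = Image.open("standout_result.png").convert("RGB")

# Step 5: vary the prompt; the reference keeps the subject consistent.
for n, prompt in enumerate(["girl with pink hair riding a bike",
                            "girl with pink hair reading in a library"]):
    pipe(prompt, ip_adapter_image=reference).images[0].save(f"variation_{n}.png")
```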
Let's get started 👇👇
1/ The first step is to train one (or more) LoRA models on a specific subject (e.g., a character or object), or on a style.
The process is straightforward. I'll use the example of the "girl with pink hair" (😊🫠) that I shared before (12 training images).