1/ Curate a training dataset (such as pictures of figurines) to feed a #Dreambooth finetune.
Once the model is ready, compare different generic prompts (e.g. "low-poly," "3D rendering," "ultra-detailed," "pixel art," etc.)
2/ Once a prompt looks good, keep the modifiers (in this case: "3D rendering, highly detailed, trending on Artstation").
And start iterating around variations, such as colors (or pose). Don't over-engineer it; keeping prompts simple preserves consistency. You should get this:
3/ A broader view of the red, blue, gold and green battalions :)
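(If you'd rather script this loop outside Scenario, here's a minimal sketch with the open-source diffusers library. The checkpoint path and the "sks" instance token are assumptions about how the Dreambooth finetune was set up.)

```python
# Minimal sketch of steps 1-2 with diffusers (not Scenario's pipeline).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./space-marine-dreambooth",  # hypothetical local Dreambooth finetune
    torch_dtype=torch.float16,
).to("cuda")

# First pass: compare generic style modifiers on the same subject.
for style in ["low-poly", "3D rendering", "ultra-detailed", "pixel art"]:
    pipe(f"a sks space marine figurine, {style}").images[0].save(
        f"compare_{style.replace(' ', '_')}.png"
    )

# Second pass: lock the winning modifiers and only vary the color.
modifiers = "3D rendering, highly detailed, trending on Artstation"
for color in ["red", "blue", "gold", "green"]:
    pipe(f"a {color} sks space marine figurine, {modifiers}").images[0].save(
        f"marine_{color}.png"
    )
```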
4/ Now you can get creative and add more colors (I just prompted "rainbow space marine, colorful" in the set below)
5/ It's not just about colors. The look or style of the models can be altered with other modifiers.
Such as "a dark Halloween zombie space marine". Now all the models have been zombified π§ββοΈππ§ββοΈ. They could also melt or glow in the dark.
6/ I randomly found that one of the soldiers looked like a baby, so I prompted "a cute baby space marine" and made everyone look younger, in seconds:
7/ Because the shape/silhouette of the "baby space marine" was interesting, I kept it and used #img2img to generate pixel art variations, in different colors:
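(Outside the UI, that img2img pass could look roughly like this diffusers sketch; the file name, colors, and strength value are placeholders.)

```python
# Hedged sketch of the img2img step: keep the silhouette, vary color/style.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./space-marine-dreambooth", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("baby_marine.png").resize((512, 512))
for color in ["red", "teal", "purple"]:  # example colors
    out = img2img(
        prompt=f"a cute baby space marine, pixel art, {color} armor",
        image=init_image,
        strength=0.6,        # lower = closer to the reference image
        guidance_scale=7.5,
    ).images[0]
    out.save(f"pixel_baby_{color}.png")
```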
8/ img2img is very powerful when combined with Dreambooth. I went back to my previous tweet (link), selected a "cyclopean golem", and reworked it into space-soldier lookalikes, like this:
9/ Same story below, but with a different (white) golem.
Also, I added "no gun" as a negative prompt. None of the soldiers has the machine gun they held before.
10/ As a bonus, you can have more fun with your finetune and generate other storytelling visuals.
For example, "urban warfare background" or "a group of space marines dancing in a night club"
11/ Or even take the image of a tank and run #img2img to generate tank-like space robots/creatures.
12/ I'm hoping this demonstrates the power of having your own "finetune(s)".
Training lets you iterate around a specific concept, style, or object. You get consistent results, faster, grounded in your own training data. No more endless random prompting. What else? :)
end/ I've had some great discussions about it with the gaming community (artists or studios) over the past few days. If you have questions or would like to have a chat, please comment or DM me; happy to connect with innovative game creators.
Here are the key steps to creating stunning turnarounds, using #Scenario ()
1/ Train or pick a character model (A).
2/ Optionaly>, pick a style model (B). Use it to create training images for (A), or you can merge both (A + B = C) for example.
3/ Utilize the custom model (A or C) to generate consistent characters. Then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using Sketching and image2image.
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).
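(For step 6, here's a rough stand-in using the public Stable Diffusion x4 upscaler. Scenario's own upscaler goes further, up to 8K; the model id and "turnaround.png" are placeholders, not the tool used in this thread.)

```python
# Rough stand-in for the final upscale step.
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

high_res = upscaler(
    prompt="character turnaround sheet, clean line art, consistent proportions",
    image=load_image("turnaround.png"),
).images[0]
high_res.save("turnaround_x4.png")
```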
@ClaireSilver12 - (I hope you don't mind me retweeting this for broader reach and to share it with more users.)
Here's an advanced use case for the IP Adapter. You can adjust or remove the steps depending on the desired output/goal. Bear with me; it's actually quite straightforward.
1 - Train a LoRA on a specific subject (e.g., character).
2 - Blend the LoRA to perfectly capture the style (e.g., comic, cartoon, oil painting, 3D...).
3 - Run inference on that "blended" model.
4 - Select an image that stands out and use it as a reference with the IP Adapter.
5 - Modify the prompt to create variations of the subject.
Let's get started 👇
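(For anyone scripting this outside Scenario, here's a minimal sketch of steps 1-5 using diffusers' IP-Adapter support. The model ids, LoRA path, reference image, and adapter scale are all assumptions.)

```python
# Minimal sketch: subject LoRA + IP Adapter reference + prompt variations.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    torch_dtype=torch.float16,
).to("cuda")

# Steps 1-2: load the subject LoRA (already blended toward the target style).
pipe.load_lora_weights("./pink-hair-girl-lora")

# Step 4: attach the IP Adapter and feed it the selected reference image.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)
reference = load_image("selected_reference.png")

# Steps 3 + 5: run inference, changing only the prompt to get new variations.
for scene in ["reading in a library", "walking in the rain"]:
    pipe(
        prompt=f"girl with pink hair, {scene}, comic style",
        ip_adapter_image=reference,
        negative_prompt="blurry, low quality",
    ).images[0].save(f"variation_{scene.replace(' ', '_')}.png")
```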
1/ The first step is to train one (or more) LoRA models on a specific subject (e.g. a character or object), or even a style.
The process is straightforward. I'll use the example of the "girl with pink hair" that I shared before (12 training images).
Simply select "New Model - Train" on . I use 9 images of the model, showcasing various angles and zoom levels, accompanied by concise captions (details below).
This could be the best model I've ever created for generating isometric buildings on Scenario.
Outputs consistently match the style I wanted, and the model responds perfectly to (short) prompts, without any reference images needed.
It's a LoRA composition. More below.
Process: it's pretty simple.
I created a LoRA composition from four distinct LoRAs.
(i) - My own "Fantasy Buildings" LoRA
(ii) - Three LoRAs available on #Scenario: "Isometric Storybook", "Stylized Fantasy Iconic Imagery" and "Belgian School Comics".
The influence of each LoRA is below.
My prompt structure was dead simple... less than 10 words!
(type of building/scene), solid color background, highly detailed, centered.
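(A comparable four-way composition can be scripted with diffusers' multi-adapter support. The LoRA paths and per-adapter weights below are assumptions; the actual influence values aren't spelled out in the text.)

```python
# Sketch of a four-way LoRA composition plus the short prompt structure.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the four LoRAs (hypothetical local paths).
pipe.load_lora_weights("./fantasy-buildings-lora", adapter_name="fantasy_buildings")
pipe.load_lora_weights("./isometric-storybook-lora", adapter_name="isometric_storybook")
pipe.load_lora_weights("./stylized-fantasy-lora", adapter_name="stylized_fantasy")
pipe.load_lora_weights("./belgian-school-comics-lora", adapter_name="belgian_comics")

# Blend them; each weight sets that LoRA's influence (placeholder values).
pipe.set_adapters(
    ["fantasy_buildings", "isometric_storybook", "stylized_fantasy", "belgian_comics"],
    adapter_weights=[1.0, 0.8, 0.5, 0.4],
)

# Same prompt structure: (type of building/scene) + a few fixed modifiers.
image = pipe(
    "isometric wizard tower, solid color background, highly detailed, centered"
).images[0]
image.save("isometric_tower.png")
```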