While we're here... I'm not a flower specialist, so I asked #ChatGPT for some ideas of well-known flower species.
Prompt: "list the names of 20 of the most beautiful flowers"
Done.
"Iris flower, pixel art, 8-bit, sRGB, icon"
"Peony flower, pixel art, 8-bit, sRGB, icon"
"Lavender flower, pixel art, 8-bit, sRGB, icon"
Note: it looks like the AI took "lavender" as the flower's color rather than its actual type (genus/species).
"Hibiscus flower, pixel art, 8-bit, sRGB, icon"
"Chrysanthemum, pixel art, 8-bit, sRGB, icon"
"Clematis flower, pixel art, 8-bit, sRGB, icon
"Poppy flower, pixel art, 8-bit, sRGB, icon"
"Carnation flower, pixel art, 8-bit, sRGB, icon"
"Passion flower, pixel art, 8-bit, sRGB, icon"
I can access all my AI-generated images under the "Images" tab and also via the "Generator" icon itself.
I can easily see the prompts I used, compare batches, or download individual images (more filtering/sorting features are coming shortly).
While not all output images are perfect (some prompts could have been more precise, and some flowers could have looked better), it's still an excellent example of how to use Stable Diffusion finetunes (with @Scenario_gg) to explore a specific graphic direction, consistently.
• • •
Absolutely loving that this is happening during GDC week. My schedule's packed with meetings & meetups, so there's not much time to vibe code, but I spun up a basic demo of a platformer jumping game in minutes.
This was fully prompted via Claude 3.7 (on the left), zero manual tweaks. Link below 👇 and I'll keep sharing improvements and tips!
2025 is going to be a wild year for AI-powered game devs.
I used @JustinPBarnett's MCP on GitHub - check it out here
So far, it feels even easier than Blender, and I can't wait to add more actions, assets, textures, and gameplay! github.com/justinpbarnett…
My main tip so far is that, just like with Blender MCP, you should proceed step by step >> one edit or element at a time.
Otherwise, Claude will go crazy and will try doing everything at once (and fail).
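For context, Claude Desktop picks up MCP servers from its claude_desktop_config.json file. A minimal sketch of what an entry looks like (the server name, command, and path below are placeholders, not the repo's actual values; follow its README for the real setup):

```json
{
  "mcpServers": {
    "unity": {
      "command": "uv",
      "args": ["--directory", "/path/to/unity-mcp", "run", "server.py"]
    }
  }
}
```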
Here are the key steps to creating stunning character turnarounds using #Scenario:
1/ Train or pick a character model (A).
2/ Optionally, pick a style model (B). Use it to create training images for (A), or merge the two (A + B = C), for example.
3/ Utilize the custom model (A or C) to generate consistent characters. Then select a reference image to produce initial character turnarounds in your desired poses.
4/ Refine these initial outputs using Sketching and image2image (see the code sketch after this list).
5/ Select the best result and refine details in the Canvas for maximum consistency.
6/ Finally, upscale your final image (up to 8K resolution).
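For step 4, the thread uses Scenario's built-in tools, but the same image2image refinement pass can be sketched with the open-source diffusers library (the base model, file names, and strength value are illustrative assumptions, not Scenario's internals):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# In practice you'd load your custom model (A or C); a stock SD 1.5
# checkpoint stands in here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = load_image("rough_turnaround.png")  # hypothetical initial output

# Lower strength preserves more of the initial pose; higher strength
# lets the model redraw more aggressively.
refined = pipe(
    prompt="character turnaround, side view, clean lines, consistent design",
    image=init,
    strength=0.45,
    guidance_scale=7.5,
).images[0]
refined.save("refined_turnaround.png")
```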
@ClaireSilver12 - (I hope you don't mind me RTing this for broader reach and to share it with more users.)
Here's an advanced use case for the IP Adapter. You can adjust or remove steps depending on your desired output/goal. Bear with me; it's actually quite straightforward. (A rough code sketch of steps 3-5 closes this section.)
1 - Train a LoRA on a specific subject (e.g., character).
2 - Blend the LoRA to perfectly capture the style (e.g., comic, cartoon, oil painting, 3D...).
3 - Run inference on that "blended" model.
4 - Select an image that stands out and use it as a reference with the IP Adapter.
5 - Modify the prompt to create variations of the subject.
Let's get started 👇👇
1/ The first step is to train one (or more) LoRA models on a specific subject (e.g., a character or object), or on a style.
The process is straightforward. I'll use the "girl with pink hair" example that I shared before (12 training images).
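The thread walks through all of this in Scenario's UI; for reference, here is roughly what steps 3-5 look like with the open-source diffusers library, which ships IP Adapter support (the checkpoint names, LoRA path, scale, and file names are assumptions for illustration):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Steps 1-2: load the subject LoRA trained on the character
# (hypothetical local path standing in for the blended model).
pipe.load_lora_weights("./loras/girl_with_pink_hair")

# Step 4: attach the IP Adapter and feed it the standout image.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)  # how strongly the reference steers the output
ref = load_image("standout_result.png")  # hypothetical filename

# Step 5: vary the prompt to get new scenes of the same subject.
image = pipe(
    prompt="girl with pink hair riding a bicycle, comic style",
    ip_adapter_image=ref,
    num_inference_steps=30,
).images[0]
image.save("variation.png")
```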