Dreambooth training for Stable Diffusion is extremely powerful. You can train a new token into the "person" class to create very convincing-looking images of someone. I've posted some examples over the past few days.
But it's not the coolest thing you can do...
You can train the "style" class to create new styles. For instance, the "arcane" style is well known: it skews all results toward that particular Riot Games look.
It's one and the same Dreambooth, trained for a photographic style.
I didn't train the cars, I didn't train the vegetables, I didn't train the offices, I didn't train the landscapes.
All of that stuff is already in the SD model. The style just makes them photographic!
I'm not going to share my trained model with you, but I'll tell you how to train your own. Use JoePenna's Dreambooth. github.com/JoePenna/Dream…
Pick training material that is photographic. Invent a token, pick the "style" class, and train long enough. Try different training sets.
It's not even super sensitive to the training material. I've trained two photorealistic styles using different materials. You can fine-tune the style via the training material. I use about 20 training images myself.
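To make the recipe concrete, here is a minimal sketch of what a style training run can look like. It is not the author's exact setup (the thread uses JoePenna's repo); it maps the same idea onto the Hugging Face diffusers train_dreambooth.py example script, and the token name, paths, and step count are illustrative assumptions.

```python
# A minimal sketch, NOT the author's exact setup: the thread uses JoePenna's
# Dreambooth repo, but the same idea maps onto the Hugging Face diffusers
# train_dreambooth.py example. Paths, the "zyxmystyle" token, and the step
# count below are illustrative assumptions.
import subprocess

cmd = [
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./training_images",   # ~20 photographic images
    "--class_data_dir", "./style_reg_images",     # regularization images for the class
    "--output_dir", "./zyxmystyle-model",
    "--instance_prompt", "zyxmystyle style",      # invented token + "style" class
    "--class_prompt", "style",
    "--with_prior_preservation",
    "--prior_loss_weight", "1.0",
    "--num_class_images", "200",
    "--resolution", "512",
    "--train_batch_size", "1",
    "--learning_rate", "1e-6",
    "--lr_scheduler", "constant",
    "--max_train_steps", "2000",
]
subprocess.run(cmd, check=True)
```

The key idea is the same as in the thread: the instance prompt carries your invented token plus the "style" class word, and prior preservation keeps the rest of the model's knowledge intact.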
All the content is in the model already. #dreambooth
Then you just prompt as usual: "Car on fire, <any modifiers you like>, zyxyourstylename style"
Modifiers = "trending on artstation" or whatever you like. You can use the AUTOMATIC1111 prompt-weighting syntax to emphasize or de-emphasize things as you see fit.
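As a rough sketch of the same prompting pattern outside the AUTOMATIC1111 web UI, the diffusers library can load a Dreambooth-trained checkpoint and prompt it with the invented style token. The model path and "zyxmystyle" token below are illustrative assumptions, not the author's actual model.

```python
# A minimal sketch, assuming the Dreambooth output was saved in diffusers
# format at ./zyxmystyle-model; "zyxmystyle" is a placeholder token, not the
# author's actual style name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./zyxmystyle-model", torch_dtype=torch.float16
).to("cuda")

# Prompt pattern from the thread: subject, any modifiers you like,
# then the invented token + "style" at the end.
prompt = "Car on fire, trending on artstation, zyxmystyle style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("car_on_fire.png")
```

Note that the emphasis/de-emphasis syntax mentioned above is a feature of the AUTOMATIC1111 web UI, not of the plain diffusers pipeline.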
Here's what my Lightroom looks like. It's all done using the same style class. You don't like this particular hipster-ish photographic style? Train your own!
This all started with the vegetables. I made funny pictures of a model eating vegetables, thought the vegetables looked oddly good, and prompted for them alone. Then I realized it applies this style no matter what I prompt it to do. I'm so dumb I didn't even try that at first!
Note #1: I did my Dreambooth training with the SD 1.5 model. I'm fortunate enough to have an A6000 card; the training hit 32 GB of VRAM, at least a few days ago when I ran it. I haven't tried the same with SD 1.4, so I can't say whether the results are better or worse with it.
Hey, did you know Flux can do more than just photo models?
Flux 1 dev with 3 LoRAs + Magnific + Luminar for film grain.
See the end of the 🧵 for details. @bfl_ml
For these images, I didn't run the Ultimate SD Upscale step, so they're a bit rougher in overall appearance (which I think fits).
If you want them smoother, run the Ultimate SD Upscale step before @Magnific_AI / your favourite upscaler.
With the settings I used here, 50% of the generations are too broken/weird and aren't posted here, but who doesn't like weird? (There's some awesomeness in there I will use for other purposes.)
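As a rough sketch of the "Flux 1 dev with 3 LoRAs" part of that stack, here is what stacking LoRAs on FLUX.1-dev can look like with a recent diffusers version that supports Flux LoRA loading. The LoRA files, adapter names, weights, and prompt are illustrative assumptions, and the Magnific / Luminar steps happen in separate tools afterwards.

```python
# A minimal sketch of FLUX.1-dev with three stacked LoRAs; the LoRA paths,
# adapter names, and weights are placeholders, not the author's actual stack.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load and blend three LoRAs (names/paths are hypothetical).
pipe.load_lora_weights("path/to/lora_one.safetensors", adapter_name="one")
pipe.load_lora_weights("path/to/lora_two.safetensors", adapter_name="two")
pipe.load_lora_weights("path/to/lora_three.safetensors", adapter_name="three")
pipe.set_adapters(["one", "two", "three"], adapter_weights=[0.8, 0.6, 0.5])

image = pipe(
    "a quiet street at dusk, cinematic, film photograph",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_lora_test.png")
```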
Featuring my upcoming 500-piece main collection "REWORLD" (Day 1) and the fantastic post-photography group collection from 10 super-talented AI artists (Day 2), all pieces curated by yours truly.
Trained a new model for 'realistic photos', whatever that means. It works pretty well.
Imagine where the quality will be in a year. #stablediffusion #portrait