Roope Rainisto
AI Artist: "LIWA" + more. WME represented. Designer, creator, photographer, screenwriter, endless learner. Life is short, Just Do It!

Oct 25, 2022, 10 tweets

Dreambooth training for Stable Diffusion is extremely powerful. You can train a new token in the "person" class to create very convincing-looking images of them. I've posted some examples over the past few days.

But it's not the coolest thing you can do...

You can train the "style" class to create new styles. For instance, the "arcane" style is well known: it skews all results toward that particular Riot Games art style.

But - that's not the coolest thing you can do...

It's ridiculous, but you can just train the "style" class to be photographic. And you get back results that - well, look photographic.

EVERYTHING BECOMES PHOTOGRAPHIC.

#stablediffusion

It's one and the same Dreambooth, trained for a photographic style.

I didn't train the cars, I didn't train the vegetables, I didn't train the offices, I didn't train the landscapes.

All of that stuff is already in the SD model. The style just makes them photographic!

I'm not going to share my trained model, but I'll tell you how to train your own. Use JoePenna's Dreambooth. github.com/JoePenna/Dream…

Pick training material that is photographic. Invent a token, pick the "style" class, and train long enough. Try different training sets.
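For a rough idea of what such a run looks like, here's a sketch using Hugging Face diffusers' example DreamBooth training script, a different toolchain from the JoePenna repo the thread uses; the model ID, paths, hyperparameters, and the "zyxphotostyle" token are all placeholders, not the author's actual settings.

```shell
# Sketch of a DreamBooth style-class training run with diffusers'
# example script (alternative to JoePenna's repo). All paths, the
# token "zyxphotostyle", and the hyperparameters are illustrative.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./training_photos" \
  --class_data_dir="./style_regularization" \
  --instance_prompt="zyxphotostyle style" \
  --class_prompt="style" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --max_train_steps=2000 \
  --output_dir="./dreambooth-photostyle"
```

The key idea matches the thread: the instance prompt pairs your invented token with the "style" class word, and roughly 20 photographic images go in the instance directory.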

It's not even super sensitive to the training material. I've trained two photorealistic styles using different materials. You can fine-tune the look via your choice of training images. I use about 20 training images myself.

All the content is in the model already.
#dreambooth

You then usually prompt: "Car on fire, <any modifiers that you like>, zyxyourstylename style"

Modifiers = "trending on artstation" or whatever you like. You can use AUTOMATIC1111's prompt attention syntax to emphasize or de-emphasize things as you see fit.
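The prompt recipe above is just string assembly, so here's a minimal sketch of it in Python; the "zyxphotostyle" token is a hypothetical stand-in for whatever token you trained, and the optional weight wraps the subject in AUTOMATIC1111-style "(text:weight)" attention syntax.

```python
# Hypothetical style token; substitute the rare token you invented
# when training your own Dreambooth style.
STYLE_TOKEN = "zyxphotostyle"

def build_prompt(subject, modifiers=(), weight=None):
    """Compose a prompt ending in '<token> style'.

    If `weight` is given, wrap the subject in AUTOMATIC1111's
    attention syntax '(subject:weight)' to emphasize (>1.0) or
    de-emphasize (<1.0) it.
    """
    core = f"({subject}:{weight})" if weight is not None else subject
    parts = [core, *modifiers, f"{STYLE_TOKEN} style"]
    return ", ".join(parts)

print(build_prompt("car on fire", ["trending on artstation"]))
# car on fire, trending on artstation, zyxphotostyle style
print(build_prompt("car on fire", weight=1.2))
# (car on fire:1.2), zyxphotostyle style
```

Keeping the style token at the end of the prompt mirrors the "…, zyxyourstylename style" pattern from the thread.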

Here's what my Lightroom looks like. It's all done using the same style class. You don't like this particular hipster-ish photographic style? Train your own!

#stablediffusion #lightroom

This all started from the vegetables. I made funny pictures of a model eating vegetables. I thought the vegetables looked oddly good, so I prompted for them alone. And realized the model applies this style no matter what I prompt it to do. I'm so dumb I didn't even try at first!

Note #1: I did my Dreambooth training with the SD1.5 model. I'm fortunate enough to have an A6000 card; the training hit 32GB of VRAM, at least a few days ago when I did this. I haven't tried the same with SD1.4, so I can't say whether the results are better or worse with it.
