Nick St. Pierre
Feb 8, 2023 · 24 tweets
From initial idea → Final image

A thread of every image generated over the life of a prompt, in the order they rendered in Midjourney.

I just started a YouTube channel yesterday where I'll be exploring more AI stuff like this. Link in bio.

#midjourney #synthography

[Thread images: Gen 0 through Gen 23, posted in the order they rendered]


More from @nickfloats

Dec 18
I ran a bunch of random prompts through Google Veo 2

It's the best text-to-video model out right now

some non-cherry-picked results:
An over-the-shoulder medium shot of an artist working quietly in a cramped, dimly lit attic studio, illuminated by a single overhead lamp. The painter’s brush gently glides over the canvas, each stroke reflected in their intense, focused gaze.
A front-facing medium shot inside a bright, minimal studio with white floors and walls. A ballet dancer moves in slow motion, each graceful leap and spin captured with crisp, high-key lighting.
Jul 15
People often think Midjourney is some single-shot text-to-image generator, but it's not

its features give you a ton of control over the creative direction

> style refs
> reframing
> repainting
> parameters

let's break down a full workflow, using this image as an example 🧵
first thing I did was find style codes I like and blend them together

I used 2 codes:
--sref 2855100467 (blue, left)
--sref 3111593995 (red, right)

I played with the weights & found that I liked 3.5 parts of the blue code to 1 part red, or --sref 2855100467::3.5 3111593995::1
next I tested the global weight of my new blend against a simple prompt that included the general medium I was after

even when using style references, your prompt still has a major impact on the style

to test, I ran "painting of a woman in the city --sw {25,50,75,100,250,500}"
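For reference, the full test prompt would look something like this (the sref blend from the previous step plus the style-reference weight --sw; the curly braces are Midjourney's permutation syntax, which on plans that support it queues a separate job for each listed value):

painting of a woman in the city --sref 2855100467::3.5 3111593995::1 --sw {25,50,75,100,250,500}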
Jul 11
side-by-side examples of how style references impact your image generations in midjourney:

on the left is the image used as a style reference

on the right are the results of the prompt run with & without the style reference

the prompt was: "photo of a woman in the city"


[Side-by-side comparison images]
Jun 12
Midjourney just released a new feature called 'model personalization'

It lets you tune the MJ algorithm to your own personal tastes, removing much of the MJ "bias" that comes from its training data

Breakdown of how it works:
Every time you write a prompt there's a lot that remains 'unspoken'

MJ's algorithms fill in the blanks w/ their own 'preferences', which come with certain biases

Model personalization learns what YOU like so MJ is more likely to fill in the blanks with YOUR tastes
Right now model personalization learns from votes in pair ranking & images that you like from the explore page

You need to have roughly 200 pair rankings / likes in order for the feature to work

You can see how many ratings you have by typing /info or going to 'tasks' on alpha
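Once you're past that threshold, turning it on is just a parameter on the prompt. A minimal sketch of what that looks like on Discord (--p is the personalization flag; the subject line is just an example prompt from earlier in this page):

/info
/imagine photo of a woman in the city --p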
Jun 6
Elaborating on how to use Midjourney's "Style Reference" feature

This is how you break free of MJ's default training data "aesthetic", and fine-tune the way it interprets your prompts

Codes & examples 👇
When you use the style reference feature, you're essentially sending MJ to a specific location in "style space"

Each location has its own unique style, vibe & aesthetic. Once you're there, any prompt you run will be influenced by the location's unique characteristics
It's a far more visual & interesting way of working in MJ

To navigate style space, you'll need:
> a style "code", or
> an image reference

Whether you use a code or an image doesn't really matter. They are effectively the same thing – coordinates to a particular "style"
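In practice the two forms look something like this (the code is the blue one from earlier on this page; the URL is just a stand-in for whatever image you want to pull style from):

photo of a woman in the city --sref 2855100467
photo of a woman in the city --sref https://example.com/style-reference.jpg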
Apr 11
Here are 7 pretty good Midjourney prompts & images you can riff on:
Ilford Delta 3200 closeup portrait --chaos 100 --ar 4:5 --style raw --stylize 1000 --weird 3000 --niji 6 --no lamp
PET scan of an alien, in the style of positron emission tomography imaging --ar 2:3 --v 6
