Nick St. Pierre
Creative Director and unofficial Midjourney shill. Publicly exploring AI & sharing learnings.
Jul 15 • 11 tweets • 5 min read
People often think Midjourney is some single-shot text-to-image generator, but it's not

its features give you a ton of control over the creative direction

> style refs
> reframing
> repainting
> parameters

let's break down a full workflow, using this image as an example 🧵

First thing I did was find style codes I like and blend them together

I used 2 codes:
--sref 2855100467 (blue, left)
--sref 3111593995 (red, right)

I played with the weights & found that I liked 3.5 parts of the blue code to 1 part red, or --sref 2855100467::3.5 3111593995::1
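To show how that blend plugs into a full prompt, here's an illustrative example (the subject is made up for this writeup, not from the original thread):

💬 editorial photo of a dancer mid-leap in an empty warehouse --ar 4:5 --style raw --sref 2855100467::3.5 3111593995::1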
Jul 11 • 7 tweets • 4 min read
side-by-side examples of how style references impact your image generations in midjourney:

on the left is the image used as a style reference

on the right are the results of the prompt run with & without the style reference

the prompt was: "photo of a woman in the city"

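For reference, the style-referenced version of that prompt looks something like this (the URL is a placeholder):

💬 photo of a woman in the city --sref {style image URL}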
Jun 12 • 6 tweets • 3 min read
Midjourney just released a new feature called 'model personalization'

It lets you tune the MJ algorithm to your own personal tastes, removing much of the MJ "bias" that comes from its training data

Breakdown of how it works:

Every time you write a prompt there's a lot that remains 'unspoken'

MJ's algorithms fill in the blanks w/ their own 'preferences', which come with certain biases

Model personalization learns what YOU like, so MJ is more likely to fill in the blanks with YOUR tastes
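In practice, personalization is toggled with the --p parameter once you've unlocked it by ranking image pairs. A minimal sketch, assuming a hypothetical profile code of abc1234:

💬 a quiet street at dawn --p abc1234 --stylize 200

Dropping the code (just --p) should fall back to your own default personalization profile.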
Jun 6 • 8 tweets • 5 min read
Elaborating on how to use Midjourney's "Style Reference" feature

This is how you break free of MJ's default training data "aesthetic" and fine-tune the way it interprets your prompts

Codes & examples 👇

When you use the style reference feature, you're essentially sending MJ to a specific location in "style space"

Each location has its own unique style, vibe & aesthetic. Once you're there, any prompt you run will be influenced by the location's unique characteristics
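As a hedged illustration of the syntax (the code and values below are placeholders, not from this thread): the --sref code picks the location in style space, and the optional --sw parameter controls how strongly it pulls on the result, roughly 0-1000 with a default of 100.

💬 still life of wildflowers in a glass jar --sref 1234567890 --sw 500 --style raw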
Apr 11 • 8 tweets • 3 min read
Here are 7 pretty good Midjourney prompts & images you can riff on:

Ilford Delta 3200 closeup portrait --chaos 100 --ar 4:5 --style raw --stylize 1000 --weird 3000 --niji 6 --no lamp
Mar 27 • 9 tweets • 3 min read
Text-to-storyboard

I'm really liking the approach @LTXStudio is taking with their video platform

Instead of going clip by clip, you prompt the basic story concept, and it generates an entire storyboard w/ multiple scenes, shots, and even character casting

Full interface tour:

You still have full control at the individual clip level, but the fact that it takes the basic idea and turns it into a structured storyboard with multiple scenes and shots you can edit is pretty amazing

This is their character consistency feature
Mar 12 • 5 tweets • 2 min read
A few more consistent character tests in Midjourney

top row → test of --cw values on clothing
middle → seeing how it transfers emotions
bottom → using --cref with different actions

prompts & some notes in thread:

MJ default is --cw 100, but try lower --cw values if you plan to specify an outfit different from your reference img

💬 cinestill 800T photo of a woman working at her desk. She's wearing a black shirt and clear lens glasses --ar 5:7 --style raw --cref {img URL} --cw {0,50,100}
Mar 12 • 8 tweets • 5 min read
Midjourney finally released their consistent character features!

You can now generate images w/ consistent faces, hair styles, & even clothing across styles & scenes

This has been the top requested feature from the community for a while now

Some examples & how it works:

It's similar to the style reference feature, except instead of matching style, it makes your characters match your Character Reference (--cref) image

I used the image on the left as my character reference

Prompts in ALT

Colorized Ilford Delta 3200 closeup portrait --chaos 100 --ar 4:5 --style raw --stylize 1000 --weird 3000 --niji 6
Cinestill 800T film still of a man pitching for the new york yankees --ar 16:9 --style raw --cref {img URL} --cw 100
Cinestill 800T film still of a man at a birthday party --ar 16:9 --style raw --cref {img URL} --cw 50
Feb 16 • 16 tweets • 5 min read
I ran all of the Sora prompts through Midjourney

Interesting how similar some are

side-by-sides against vids:

An extreme close-up of an gray-haired man with a beard in his 60s, he is deep in thought pondering the history of the universe as he sits at a cafe in Paris, his eyes focus on people offscreen as they walk as he sits mostly motionless, he is dressed in a wool coat suit coat...
Jan 29 • 4 tweets • 2 min read
Try prompting against your moodboards

You can get really incredible results with super simple prompts when paired with a well-curated moodboard

Moodboard on the left, prompted images on the right

Prompts below

prompt 1

I used custom zoom on the generated image to adjust the aspect ratio to 16:9
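The exact prompts are in the thread, but as a rough sketch of the general pattern (assuming the moodboard image is passed as a style reference, which is one common way to do this; the URL and subject are placeholders):

💬 portrait of a cellist on a rooftop at dusk --sref {moodboard image URL} --ar 16:9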
Jan 29 • 13 tweets • 9 min read
You don't need to overthink prompts for design elements like backgrounds/textures/patterns

Most of the time you only need 3-4 words

12 examples in thread:

noisy film grain texture --ar 16:9 --style raw --stylize 0 --v 6


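A few more prompts in the same spirit (illustrative examples, not pulled from the original 12):

💬 marble paper texture --ar 16:9 --style raw --stylize 0 --v 6
💬 soft gradient background --ar 16:9 --style raw --stylize 0 --v 6
💬 woven linen pattern --ar 16:9 --style raw --stylize 0 --v 6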
Jan 28 • 12 tweets • 7 min read
You can elevate pretty much any Midjourney photo prompt by including a film stock with complementary lighting conditions.

I curated some pairings you can play with

12 examples in thread:

Frogs on lily pads in a misty pond at night. Moody ambiance and soft lighting captured on Cinestill 800T film --ar 3:2 --style raw --v 6
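Another pairing in the same pattern (an illustrative example, not one of the author's curated 12): Kodak Portra 400 suits warm, golden-hour light.

💬 A cyclist coasting down a coastal road at golden hour. Warm backlight and soft haze captured on Kodak Portra 400 film --ar 3:2 --style raw --v 6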
Jan 5 • 5 tweets • 2 min read
Today is officially the last day I'll be using Midjourney in Discord!

The only thing that kept me in Discord was the ability to organize projects in my own servers

But MJ just pushed a big update to their web alpha that makes staying organized WAY easier

All the new features:

✨ Smart Folders

Back in the day MJ had something called "collections" that let you quickly group images by search term.

Now it's called "Smart Folders" and it works the same way.

Just add your terms and MJ will automatically add any prompt using those terms to your folder.
Jan 1 • 9 tweets • 3 min read
In Midjourney you can use a double colon '::' in your prompt to separate concepts

Some examples to give you a better visual:
'Eggplant' vs 'Egg:: plant'
'Headlight' vs 'Head:: light'
'Snowman' vs 'Snow:: man'

It's called multi-prompting & there's a bunch of ways you can use it 👇

When multi-prompting, MJ considers each concept separated by the '::' as its own unique prompt > imagines them individually > then blends them

By default, each part separated by '::' has an equal amount of influence over the image

But you can change that by adding a 'weight' 👇
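A quick illustration of the weight syntax (the values are arbitrary): the number after '::' sets each concept's relative influence, so here 'snow' carries twice the weight of 'man':

💬 snow::2 man::1 --v 6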
Dec 30, 2023 • 11 tweets • 4 min read
Prompting Midjourney --v 6 is VERY different than prompting --v 5

--v 6 understands language much better, which means your punctuation, syntax, & grammar matter much more

If prompted correctly, you can control almost every element in your image

A guide to prompting --v 6 👇

💬 Prompt setup:

> Set the main scene
> Describe the details
> Describe the setting
> Explore styles & mediums

Adding a basic medium in your initial setup may help you better visualize results as you iterate. Keep it at the start or end of the prompt for best results

Step 1 👇
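To make that setup concrete, here's an illustrative prompt that follows the structure (main scene, then details, then setting, then medium; it's made up for this writeup, not from the original guide):

💬 35mm film photo of an elderly fisherman repairing a net, weathered hands and a salt-stained yellow raincoat, on a foggy harbor dock at dawn --ar 3:2 --style raw --v 6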
Dec 27, 2023 • 7 tweets • 3 min read
Attempting to make a little mini-series using only AI.

I'm using midjourney to generate the images, @pika_labs to generate the video.

It ended up taking me about 3 hours to generate 60 seconds of footage.

This 16-sec clip used only 6 images.

prompts & images in thread 👇

a cave, long exposure experimental cinematography of holographic hallucinations against mysterious backdrops, photobashing of conceptual photography, god rays --ar 7:5 --style raw --v 6.0

Then some variations
Dec 21, 2023 • 29 tweets • 12 min read
Midjourney v6 is finally here!!!! 🔥

Here are some side-by-sides, --v 5.2 versus --v 6, as well as some new highly detailed prompts and camera angle tests.

These are all unaltered and unedited, straight out of Midjourney.

v6 is a HUGE leap forward

Prompts & examples 👇

a corner bar with a neon sign that says "open late"

--v 5.2 (left)
--v 6 (right)
Dec 20, 2023 • 17 tweets • 5 min read
Midjourney had their second v6 rating party.

These are supposed to be the "better" images.

After this they'll do a final tuning and then FINALLY release v6!

Prompts & examples from rating party #2 👇

Like last time, I picked images I felt fit the prompts or were funny

30 year old Auburn bearded male Figure standing at a studio desk in the corner of a messy eclectic art studio watching the sun rise out of large multi-paned historic windows drinking steamy coffee cozy and happy
Dec 16, 2023 • 40 tweets • 10 min read
We finally got our first look at v6 images 👀

Midjourney had their v6 rating party. They say these are the "bad" images. Next rating party will be the "good" ones.

I rated over 1000 images & inspected them to analyze the prompts.

v6 is gonna be insane

A bunch of examples 👇

Remember, these are the "bad" images.

I selected ones I feel highlight v6's interpretation of prompts & details.

-male with long curly blonde hair with wide shoulders at an 80's diner, looking at the back of his head, darkness and fog outside the windows pink and white and red
Nov 15, 2023 • 11 tweets • 3 min read
So far in November:

Grok from @xai
GPTs from @OpenAI
FigJam AI from @figma
Style Tuner from @midjourney
3D animation from @pika_labs
Real-time AI gen from @krea_ai
Motion Brush from @runwayml
Text-to-3D from @LumaLabsAI
Splat support from @splinetool

Examples below

ICYMI: Grok cookin
Oct 13, 2023 • 16 tweets • 5 min read
I'm running a ton of benchmarking tests for Midjourney vs. Adobe Firefly 2 vs. DALL-E 3.

Testing for speed, accuracy, quality, diversity, robustness, handling of errors, and more.

I'll be sharing my findings over the next few days. Starting with...

🪄 Robustness (details below)

Here's an overview of what I'll be covering in this series. To follow the series, follow @nickfloats

I ran the prompts several times for each test to account for randomness & variability.

I've included 16 outputs for each model & prompt so you can better visualize the results.