Trist
Mar 26 · 11 tweets · 5 min read
#Midjourney does not reliably understand lenses.

However, at least in an #AICinema context, you can use shot sizes to approximate the lens.

🧵 Here's how ...

(prompts ➡️ ALT)
#AIArtCommunity #AiArt

film still, astronaut in the jungle, close up --ar 3:2 --see
Basically, you can choose between these lenses:

🏔 Wide-angle lenses (10mm to 35mm)
🕺 Standard lenses (35mm to 85mm)
👀 Telephoto lenses (85mm to 300mm)

They often correspond to these shots:

Wide shot (wide-angle)
Medium to medium close-up (standard)
Close-up (tele)
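Putting the two lists together, a rough prompt recipe (my own shorthand based on the examples in this thread, not official Midjourney lens controls):

film still, astronaut in the jungle, close up --ar 3:2 ➡️ telephoto look
film still, astronaut in the jungle, medium shot --ar 3:2 ➡️ standard look
film still, astronaut in the jungle, wide angle --ar 3:2 ➡️ wide-angle look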
Prompting for "close-up", getting medium close-ups ... 😊
We wanted tele but are probably a bit off toward standard. Given the shallow depth of field and the compression, though, this looks closer to 80mm than to 50mm. Fair enough.

film still, astronaut in the jungle, close up --ar 3:2 --see
We just go one step further and ask for an "extreme close-up", ending up with a close-up shot.

film still, astronaut in the jungle, extreme close up --ar 3
If we wanted to go further, we could use different variants of repetition to get to actual extreme close-ups:

film still, astronaut in the jungle, extreme close up, extre
film still, astronaut in the jungle, extreme extreme close u
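(Both prompts above got cut off in the unroll; illustrative, untruncated versions of the same repetition pattern might look like this:

film still, astronaut in the jungle, extreme close up, extreme close up --ar 3:2
film still, astronaut in the jungle, extreme extreme close up --ar 3:2)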
Medium shot. I'd say right on target.

film still, astronaut in the jungle, medium shot --ar 3:2 --
Wide angle. Again, a bit too narrow, probably still in the standard lens range?

film still, astronaut in the jungle, plants and large trees
Enforcing it with repetition, we get a more distorted wide-angle shot.

film still, astronaut in the jungle, plants and large trees
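(This prompt is also cut off; the repetition version presumably looks something like: film still, astronaut in the jungle, plants and large trees, wide angle, wide angle --ar 3:2)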
It's probably also possible to reproduce these effects in studio settings with negative prompting and weights. I will look into this here: medium.com/@tristwolff

wide angle shot of an astronaut in a photo studio --ar 3:2 -
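For reference: Midjourney's negative prompting uses the --no parameter, and multi-prompts weight their parts with :: (where --no x is equivalent to x::-0.5). A speculative, untested sketch of what that studio experiment could look like:

wide angle shot of an astronaut in a photo studio --ar 3:2 --no telephoto
wide angle shot:: astronaut in a photo studio:: telephoto compression::-0.5 --ar 3:2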
For more shot sizes, camera types, prompting for props and other cinematic prompt gimmicks, check my three-part tutorial on "Cinematography with #Midjourney"
medium.com/design-bootcam…
To keep up with these explorations into AI & creativity, follow me: @tristwolff

Or sign up for my weekly newsletter:
fabulous-maker-2733.ck.page/da150f448e
