Trist
Feb 10, 2023 • 11 tweets • 6 min read
With #midjourney V4 supporting aspect ratios up to "2:1", here's how you can use them effectively in your #AICinema storytelling. 🎥🔥

🧵👇
Besides being technical requirements for screen sizes, aspect ratios can also serve as an aesthetic backdrop for your stories. 🎬

E.g. 4:3 (or "1.33:1" in cinema parlance) is the classic ratio of TV monitors & perfect for scenes meant to evoke an analog-TV vibe.
"11:8" ("1.375:1" in cinema parlance), also called the "Academy ratio", is a standard introduced in 1932.

It's outdated, but still in use as an aesthetic tool. E.g. Wes Anderson used it to differentiate timelines in "The Grand Budapest Hotel":
Here's an "11:8" image placed on a "16:9" screen. Perfect for flashbacks (not only to the 1930s) and #AIArt

Prompt: "film still, astronaut in the desert, style by 1930s sci-fi movie, 8k --ar 11:8"
16:9 aka Standard Widescreen. Most people nowadays are well conditioned to experience 16:9 as a "cinematic format".

Prompt: "film still, astronaut in the desert --ar 16:9"
1.85:1 aka the US Widescreen Cinema Standard
Hollywood's 1950s invention to combat home TV with the "1.85:1" film experience.

Midjourney only allows integer values for ratios, so instead of "1.85:1" we can use "37:20" or "185:100".

It has a slightly more cinematic vibe than 16:9.
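You don't have to convert decimal cinema ratios to integer pairs by hand: Python's standard library does it. A minimal sketch (the function name is my own, not a Midjourney tool):

```python
from fractions import Fraction

def to_midjourney_ar(ratio: float, max_denominator: int = 100) -> str:
    """Convert a decimal aspect ratio (e.g. 1.85) to the integer
    width:height form Midjourney's --ar flag expects."""
    f = Fraction(ratio).limit_denominator(max_denominator)
    return f"{f.numerator}:{f.denominator}"

print(to_midjourney_ar(1.85))   # 37:20
print(to_midjourney_ar(2.39))   # 239:100
```

Capping the denominator at 100 keeps the ratio readable while staying exact for the common cinema standards.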
"14:9" is mainly used to broadcast widescreen material (e.g. 16:9) to analog TV screens (4:3).

Try projecting "14:9" onto a "4:3" screen for a cozy cinematic TV vibe.

It also makes a nice passe-partout effect for #AIArtwork #aiartcommunity
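To preview that passe-partout effect, you can compute how thick the black bars get when 14:9 content is letterboxed onto a 4:3 screen. A quick sketch (width-matched fit; the function name is mine):

```python
def letterbox_bar_height(screen_w: float, screen_h: float,
                         ar_w: int, ar_h: int) -> float:
    """Height of each black bar (top and bottom) when content with
    aspect ratio ar_w:ar_h is fitted to the full screen width."""
    content_h = screen_w * ar_h / ar_w
    return (screen_h - content_h) / 2

# 14:9 content on a 640x480 (4:3) screen:
print(round(letterbox_bar_height(640, 480, 14, 9)))  # 34 px top & bottom
```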
Midjourney's current max ratio of "2:1" is actually a proposal for a new universal standard for cinema & digital media by Italian cinematographer Vittorio Storaro.

("2:1" is approximately the mathematical average of cinema widescreen "2.2:1" and TV widescreen "16:9".)
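The arithmetic behind that averaging claim is easy to check:

```python
cinema = 2.2      # cinema widescreen, 2.2:1
tv = 16 / 9       # TV widescreen, ~1.78:1

# Mean of the two widescreen standards lands almost exactly on 2:1:
print(round((cinema + tv) / 2, 2))  # 1.99
```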
🎥🎬 Note that there's a convention among film people to set "1" as the value for the height of an image, so you'll see aspect ratios written like "1.85", short for "1.85:1" ... equivalent to 37:20. 🤷‍♂️
Some pretty cool widescreen ratios are not supported by #MidjourneyV4 yet, e.g. "2.2:1" cinema widescreen or "2.39:1", the anamorphic standard also used by Bollywood.

Here's hoping it will be added in the future. Meanwhile, V3 does render Bollywood style 😊💃
I'm putting together a couple of tutorials about #midjourney & #cinematography over at @Medium. I'd be happy if you check them out and let me know whether they're helpful! 🙏

medium.com/design-bootcam…

medium.com/design-bootcam…


More from @tristwolff

Mar 28, 2023
1/ 🚀 Runway's groundbreaking #Gen1 is here!

@runway has the tech, the network, and the community to reshape the online video landscape.

Here's why we should get ready...

🧵👀
2/ 💡 @runway has been developing AI video tools since 2018 and has garnered support from content creators and major movie/TV studios. 🎬

And in 2021, they teamed up with @LMU_Muenchen to develop the first version of #stablediffusion 🤯

technologyreview.com/2023/02/06/106…

#AICinema #AiArt
3/ 🌍 During #Gen1's development, @runwayml partnered with researchers, filmmakers, and an amazingly creative community 🚀 #aicinema

Mar 26, 2023
Spending Sunday with #Midjourney ... 🧵
Mar 26, 2023
#Midjourney does not reliably understand lenses.

However, at least in an #AICinema context, you can use shot sizes to approximate the lens.

🧵 Here's how ...

(prompts ➡️ ALT)
#AIArtCommunity #AiArt film still, astronaut in the jungle, close up --ar 3:2 --see
Basically, you can choose between these lenses:

🔍 Wide-angle lenses (10mm to 35mm)
🕺 Standard lenses (35mm to 85mm)
👀 Telephoto lenses (85mm to 300mm)

They often correspond to these shots:

Wide shot (wide-angle)
Medium to medium close-up (standard)
Close-up (telephoto)
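The correspondence above can be written down as a small lookup table. This is just a heuristic distilled from the thread, not an official Midjourney parameter:

```python
# Rough shot-size -> focal-length correspondence (heuristic only):
SHOT_TO_LENS_MM = {
    "wide shot": (10, 35),         # wide-angle
    "medium shot": (35, 85),       # standard
    "medium close-up": (35, 85),   # standard
    "close-up": (85, 300),         # telephoto
}

def approx_lens(shot: str) -> str:
    """Return the focal-length range a given shot size tends to imply."""
    lo, hi = SHOT_TO_LENS_MM[shot]
    return f"~{lo}-{hi}mm"

print(approx_lens("close-up"))  # ~85-300mm
```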
Prompting for "close-up", getting medium close-ups ... 😊
We wanted telephoto but are probably a bit off toward standard. However, given the depth of field and compression, it's closer to 80mm than to 50mm. Fair enough. film still, astronaut in the jungle, close up --ar 3:2 --see
Mar 25, 2023
😮 So excited that a major TV production company approached me to help them develop prompts & workflows for story development using #GPT4. 🚀

⏰ For #screenwriters & #filmmakers, this is a wake-up call

🧵
💰💻💡 The industry is eager to save money & resources w/ AI efficiency, and #GPT4 is leading the charge. I expect production processes to be changed within a year.

The infamous #OpenAI study has shown: creative tasks are among the most exposed to AI automation.

The consequences of AI are real, and they're coming to town. Now, how can AI improve the creative process? By helping with structural groundwork, for example:
medium.com/design-bootcam…
#AI #storytelling
Mar 21, 2023
New paper on #GPT4 & "Labor Market Impact" is out.

It's an early evaluation, but creatives should be paying attention to this ... 👀

🧵 1/5
All the jobs that show zero exposure to automation by LLMs are defined by

- blue-collar, manual labor
- outdoor or non-office work
- lower educational requirements
- use of specialized tools or equipment

As we'll see: jobs w/ higher educational requirements ➡️ high exposure 🚨 2/5
☝️ In fact, if your job is defined by

- creative or intellectual labor
- office or indoor work
- high educational requirements

it's much more exposed to being replaced by #GPT (alpha) or #GPT-based apps (beta & zeta):
3/5
Mar 21, 2023
AI text-to-video is now available as an open-source model 🚀

It's still in its early stages & nowhere near as mature as #Midjourney, but it's definitely worth exploring.

Here's how it works & how you can experiment with it.

🧵
#Modelscope #text2video #AIArt #AiArtCommunity
The model is simply called "text-to-video synthesis". A brief summary:

- 1.7 billion parameters 🔥
- training data includes public datasets like LAION-5B (5.85 billion image-text pairs), ImageNet (14 million images) & WebVid (10 million video-caption pairs) 🌎
- open source 💪
Text-to-video synthesis consists of three sub-networks that work together to produce short MP4 video clips:

- a text feature extraction network,
- a text-feature-to-video diffusion model,
- and a video-to-video diffusion model.

🚀🚀🚀
