Midjourney's current maximum aspect ratio of 2:1 actually matches "Univisium", a proposal for a new universal standard for cinema & digital media by Italian cinematographer Vittorio Storaro.
(2:1 is approximately the mathematical average of cinema widescreen 2.2:1 and TV widescreen 16:9.)
🎥🎬 Note that there's a convention among film people to set "1" as the value for the height of an image. So you will see aspect ratios like "1.85", short for "1.85:1"... equivalent to 37:20. 🤷‍♀️
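If you ever want to convert between the decimal and integer notations yourself, Python's fractions module does the trick. A minimal sketch, using only the ratios mentioned above:

```python
from fractions import Fraction

# Decimal notation -> integer ratio: "1.85" is exactly 37:20
ratio = Fraction("1.85")
print(f"{ratio.numerator}:{ratio.denominator}")  # 37:20

# Storaro's 2:1 as (roughly) the average of 2.2:1 and 16:9
cinema, tv = Fraction("2.2"), Fraction(16, 9)
print(float((cinema + tv) / 2))  # ~1.99, i.e. close to 2
```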
Some pretty cool widescreen ratios are not supported by #MidjourneyV4 yet, e.g. "2.2:1" cinema widescreen or "2.39:1", the anamorphic widescreen standard that Bollywood movies favor, etc.
Here's hoping it will be added in the future. Meanwhile, V3 does render Bollywood style.
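For context, aspect ratios are requested via Midjourney's --ar parameter, which takes whole numbers. A hypothetical prompt pair to illustrate (which ratios a given model version actually accepts is something to verify in the docs):

```
/imagine prompt: night train leaving a desert station, cinematic lighting --ar 2:1 --v 4
/imagine prompt: night train leaving a desert station, cinematic lighting --ar 239:100 --v 3
```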
I am putting together a couple of tutorials about #midjourney & #cinematography over at @Medium. I would be happy if you checked them out and let me know if they're helpful!
Wide-angle lenses (10mm to 35mm)
Standard lenses (35mm to 85mm)
Telephoto lenses (85mm to 300mm)
They often correspond to these shot types (a quick code sketch follows the list):
Wide shot (wide-angle)
Medium to medium close-up (standard)
Close-up (telephoto)
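Here's that focal-length-to-shot heuristic as a minimal Python sketch; the cutoffs are just the rough ranges from above, nothing official:

```python
def shot_type(focal_length_mm: float) -> str:
    """Map a focal length to the shot it typically produces."""
    if focal_length_mm < 35:     # wide-angle lens
        return "wide shot"
    elif focal_length_mm < 85:   # standard lens
        return "medium to medium close-up"
    else:                        # telephoto lens
        return "close-up"

for mm in (24, 50, 135):
    print(f"{mm}mm -> {shot_type(mm)}")
```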
Prompting for "close-up", getting medium close-ups ...
We wanted tele, but this is probably a bit off toward standard. However, given the depth of field and the compression, it's closer to 80mm than to 50mm. Fair enough.
💰💻💡 The industry is eager to save money & resources with AI efficiency, and #GPT4 is leading the charge. I expect production processes to be changed within a year.
The infamous #OpenAI study has shown: Creative tasks are the most exposed to AI automation
The consequences of AI are real, and they're coming to town. Now, how can AI improve the creative process? By helping with structural groundwork, for example: medium.com/design-bootcam… #AI #storytelling
The model is simply called "text-to-video synthesis". A brief summary:
- 1.7 billion parameters 🔥
- training data includes public datasets like LAION-5B (5.85 billion image-text pairs), ImageNet (14 million images) & WebVid (10 million video-caption pairs)
- open source 💪
Text-to-video synthesis consists of three sub-networks that work together to produce short MP4 video clips:
- a text feature extraction network,
- a text-feature-to-video diffusion model,
- and a video-to-video diffusion model.
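If you want to try it, the model is published on ModelScope. The snippet below follows the usage pattern from its model card as I recall it, so treat the model ID and output key as assumptions and verify them against the card before running:

```python
# pip install modelscope (plus its heavy deps; generation needs a beefy GPU)
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# Model ID as listed on ModelScope (assumption: double-check the model card)
p = pipeline("text-to-video-synthesis", "damo/text-to-video-synthesis")

result = p({"text": "A panda eating bamboo on a rock."})
print(result[OutputKeys.OUTPUT_VIDEO])  # path to the generated MP4 clip
```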