How to Prepare for GEN-1 Access - 5 Tips to Hit the Ground Running

On the GEN-1 Waitlist? I got access & it’s incredible, but there are many things I wish I had known before.

Here are 5 tips so you can start making amazing videos with GEN-1 on day 1. 👇🧵

1/
1: Preparing Videos for GEN-1

There’s currently a 3s limit on video outputs & the input-to-output timing is not 1:1, so you’ll have to do some time-remapping to get the best results.

To test out GEN-1, I’ve been filming short clips & creating edits built from 3s blocks.

2/
This is the flow I use to get the final GEN-1 exports to (almost) match the original timing:

a) create a final edit
b) import into a sequence & cut it into 3s pieces
c) adjust the speed/duration to about 135% (a 3s piece becomes ~2.2s)
d) export each ~2s piece as an mp4 & name each _1, _2, etc

3/
Once you have access, you will be able to process each file & they will end up being about 3s each!
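The cut-and-retime flow above can be sketched as a small script. This is a hedged illustration, not the author's actual workflow: ffmpeg, the `setpts` retiming filter, and the file names are my assumptions - any editor that can retime and export clips works the same way.

```python
# Sketch of steps (b)-(d): cut a finished edit into 3s blocks,
# speed each block up to 135%, and export numbered mp4s.
# ffmpeg + filenames are assumptions; use whatever NLE you like.

SPEED = 1.35   # 135% playback speed
BLOCK = 3.0    # cut the edit into 3s blocks

def export_commands(source: str, total_seconds: float) -> list[str]:
    """Build one ffmpeg command per 3s block of the source edit."""
    cmds = []
    t, i = 0.0, 1
    while t < total_seconds:
        dur = min(BLOCK, total_seconds - t)
        # setpts=PTS/1.35 speeds the video up; a 3s block becomes ~2.2s
        cmds.append(
            f'ffmpeg -ss {t:.2f} -t {dur:.2f} -i {source} '
            f'-vf "setpts=PTS/{SPEED}" -an clip_{i}.mp4'
        )
        t += BLOCK
        i += 1
    return cmds

cmds = export_commands("final_edit.mp4", 9.0)
for c in cmds:
    print(c)
```

Each ~2.2s export then comes back from GEN-1 at roughly 3s, which is why the final stitched sequence (almost) matches the original timing.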

4/
2: Preparing Longer Videos - Use Overlaps

GEN-1 can only process 3s clips, but that doesn’t mean you can’t create longer content from a single take. You’ll just need to export short clips (for now) & stitch them back together.

5/
If you’re making things for fun this may be overkill, but if you want single takes to be as seamless as possible, create overlaps so you can stitch longer scenes together without missing frames.

In this case, cut into 2.5s clips & extend each by 0.5s so consecutive clips share frames (then adjust the speed of all of them to 135%).
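Here is one way to picture the overlap trick as cut points - a minimal sketch, assuming you step through the take every 2.5s and extend each clip by 0.5s:

```python
# Overlapping cut points: 2.5s steps, each clip extended by 0.5s
# so consecutive clips share half a second of frames for stitching.
STEP, OVERLAP = 2.5, 0.5

def overlapped_cuts(total_seconds: float) -> list[tuple[float, float]]:
    """Return (start, end) times for clips that overlap by 0.5s."""
    cuts = []
    t = 0.0
    while t < total_seconds:
        end = min(t + STEP + OVERLAP, total_seconds)
        cuts.append((t, end))
        t += STEP
    return cuts

# For a 7.5s take: (0.0, 3.0), (2.5, 5.5), (5.0, 7.5)
print(overlapped_cuts(7.5))
```

Because each clip shares its last 0.5s with the next clip's first 0.5s, you can trim the duplicate frames at the seam when you stitch the GEN-1 outputs back together.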

6/
Note: You will need to use the same seed number, image/prompt & other settings to keep the results consistent.

7/
3: Preparing Videos - High Framerate

In most cases, you can use the interpolate setting & the results are solid. But if you are a glutton for punishment & REALLY want to get the smoooothest results, there is a way.

8/
By default, the subsampling is 2 which will drop every other frame. Setting this to 1 means you will not drop frames, but you will also need to do twice as much work.

In this case, cut into 1.5s clips (then adjust the speed to 135%).
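The "twice as much work" claim is just frame arithmetic. A quick sketch, assuming a 24 fps source (the fps value is my assumption, not from the thread):

```python
# Rough arithmetic for the subsampling setting: GEN-1 keeps every
# Nth frame, so subsampling 2 halves the frames it processes per clip.
def frames_processed(clip_seconds: float, fps: float, subsampling: int) -> int:
    """Number of frames GEN-1 actually processes from one clip."""
    return int(clip_seconds * fps) // subsampling

print(frames_processed(3.0, 24, 2))  # 36 - default drops every other frame
print(frames_processed(1.5, 24, 1))  # 36 - same workload from half the footage
```

That is why dropping subsampling to 1 means cutting 1.5s clips instead of 3s ones: each clip carries the same number of frames, so you need twice as many clips to cover the same footage.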

9/
4: Prepare Images / Prompts

Other than the source video, the most important thing to create with GEN-1 is your image/prompt. You can use real images, but things really get fun when you start generating images with AI & applying them to your videos.

10/
Start creating a collection of images in styles that you think might work well with your videos.

When creating your images:

Describe what you want the video to visually turn into - what is the mood, style, lighting, texture, colors?

11/
Describe the main objects/characters that are in your video - if you have a person in your video, try making some images with a person in a style you’d like to see. If there is a mountain or a boardwalk, try that.

12/
The more details, elements, textures, etc. you have, the more GEN-1 has to pull from. You can use this to your advantage - keep the images simple & the resulting video will be too (e.g. a b/w sketch) - include lots of details & the results will be more varied.

13/
You’ll never get exactly what you’re imagining, but once you have access & are able to try it out, you’ll be pleasantly surprised & will want to keep iterating.

One thing I found to work well: using the 2x2 grid images returned from MidJourney. This tends to give the results a cool variety.

14/
5: You Got In!!! Things to Do on Day 1 (Mess with the Parameters)

The email will come, you are almost there… You will need to join their Discord & click a few buttons. Soon after you will get full access.

15/
Like MidJourney, you use GEN-1 via Discord. Add a video + image/prompt & ~30s later you get a video.

The defaults are pretty good - you can try the same video & image a few times & it’ll auto-change the seed each time. A video that was just OK can be amazing with a different seed.

16/
To really get the most out of GEN-1, you will want to play with the parameters.

Here are some of the key params to use:

17/
--seed: when you find a seed number you like, you can re-input it to get the same style. Use this to make further refinements to the other params, as well as to apply the same style to other video clips in your sequence.

18/
--cfg_scale: adjusts how much the image/prompt is taken into account. Higher numbers like 12 will stick closer to the image/prompt; lower numbers like 7 will be more ‘creatively interpreted’.

19/
--depth_blur_level: adjusts how closely the output sticks to the input’s structure. 0 will stay close to the video, 5 will end up more wild (and 7 pretty much ignores the video and makes some crazy sh*t).

20/
--subsampling: the default of 2 will drop every other frame; setting it to 1 keeps all the frames. The higher this number, the more total time from a video can be processed (but fewer frames per second from the original).

--interpolate: set this to ‘true’ to get a smoother video.

21/
--upscale: once you’ve adjusted everything & you’re happy with the results, you can run with this param to get a significantly upscaled result (+ a few more frames).

Overall, you should mess around with these settings & once you like the results, upscale to get a higher-quality final video.
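One way to keep a settings combo you like and reuse it across clips is to treat it as data. A hypothetical helper - the flag names come from this thread, but the exact Discord syntax and the values shown are my assumptions:

```python
# Hypothetical helper: store a GEN-1 settings combo once, reuse it
# on every clip in the sequence. Flag names are from the thread;
# the Discord invocation syntax and values here are assumptions.
def param_string(**params) -> str:
    """Join keyword settings into a --flag value string."""
    return " ".join(f"--{k} {v}" for k, v in params.items())

base = dict(seed=4221, cfg_scale=12, depth_blur_level=2,
            subsampling=1, interpolate="true")
print(param_string(**base))

# Re-run the best result with upscale added for the final export:
print(param_string(**base, upscale="true"))
```

The point is the workflow, not the helper: lock in one dict of settings (especially the seed), run it on every clip, and only add upscale for the final pass.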

22/
When I work, I typically start with one video, find a seed and settings I like, then use those on the rest of the clips. Most of the time I keep them across the shots, but sometimes I’ll change the seed on specific clips if I like the results better.

23/
Thanks for reading, follow & retweet if you dig it!

I’m going to cover all the parameters in more detail (with examples) in an upcoming thread.



#gen1 #ai #aivideo #midjourney #runway #future #AIart
