Framer 🇱🇹
Aug 3, 2024 · 16 tweets · 6 min read
“Pippen and The Magic Lamp” 🫖

I created this fun Claymation short in just 7 hours.

Surprisingly, it was much easier than I expected.

Here’s what I did 🫳
Context:

Everything I've created so far has been mainly one-scene animations. They're fun to make, and you guys love them. But it's become my comfort zone.

So, I wanted to create something that would address the two main AI filmmaking problems: consistency and motion control.
Have I used any image-to-video tools like Luma or Gen-3?

Nope. You don’t have motion control there yet.

That's why I used a combination of four simple AI tools.

As with all my other workflows, this one is beginner-friendly. No Blender, ComfyUI, or Stable Diffusion required.
I started by generating the Midjourney picture below.

To have full control over the character, I needed to get him into a T-pose first.

So, I cut him out and used Midjourney's /describe function to generate a similar character.
Then, I uploaded the character's picture to MeshyAI and generated a free 3D model in just a few minutes.

Downloaded it as an .fbx file.
After that, I uploaded the model to Mixamo, a free 3D character animation library.

Selected a “Stand Up” animation.

I chose an angle that would match my background scene and simply screen-recorded the animation (mixamo.com).
To improve the quality of the character and give it a clay style, I used a video-to-video tool called @GoEnhance.

They have an amazing Claymation filter.
Then, I removed the background using Runway's Remove Background tool.

Placed the character on top of my Midjourney picture (the original static character was removed with Photoshop's Generative Fill).
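That compositing step (a transparent character cutout placed over a background plate) can be sketched in Python with Pillow. The tiny solid-color images below are hypothetical stand-ins for the real Midjourney scene and the matted character; the technique is the same: paste the cutout using its own alpha channel as the mask.

```python
from PIL import Image

# Hypothetical stand-ins: in the real workflow the background is the
# Midjourney scene and the cutout is the character after matting.
background = Image.new("RGBA", (640, 360), (120, 90, 60, 255))

character = Image.new("RGBA", (100, 200), (0, 0, 0, 0))  # fully transparent
body = Image.new("RGBA", (60, 160), (200, 50, 50, 255))  # opaque "character"
character.paste(body, (20, 20))

# Composite the cutout near the bottom-centre of the scene,
# using its own alpha channel as the mask so only the opaque
# pixels land on the background.
frame = background.copy()
frame.paste(character, (270, 160), mask=character)
```

An editor like After Effects or Filmora does exactly this per frame when you stack a transparent video layer over a still background.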
I simply repeated this workflow for all the scenes.

However, you might be wondering how I created scenes #3 and #4, where the camera angles are ground-level and over-the-shoulder.
For the character, I just changed the camera angle in Mixamo while keeping the same animation I used in the previous scene (“Looking Down”).
For the background, I selected pictures that would logically represent the environment from these camera angles.

Then, I used Magnific’s Style Transfer tool.

It took some time to find the right reference pictures and settings, but if you love the process, it's not that hard, right? :)
I used After Effects to assemble everything, but you can easily achieve the same results with Filmora (though not CapCut, because you can't draw a mask there).
Note that character, environment, and style consistency aren't enough: the brightness and contrast of the assets need to match as well.

For this, I used the Hue/Saturation tool in After Effects.

Made small adjustments to most of the assets to ensure they blended with the background.
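The idea behind those adjustments, matching an asset's overall brightness to the background so it sits in the scene, can be sketched with Pillow. The images and the simple mean-luminance matching below are illustrative assumptions, not the exact After Effects Hue/Saturation math.

```python
from PIL import Image, ImageEnhance, ImageStat

def match_brightness(asset, reference):
    """Scale the asset's brightness so its mean luminance
    roughly matches the reference (background) image."""
    asset_mean = ImageStat.Stat(asset.convert("L")).mean[0]
    ref_mean = ImageStat.Stat(reference.convert("L")).mean[0]
    factor = ref_mean / asset_mean if asset_mean else 1.0
    return ImageEnhance.Brightness(asset).enhance(factor)

# Hypothetical assets: a dark character pasted into a bright scene
# would visibly "float" until its brightness is pulled up.
background = Image.new("RGB", (64, 64), (200, 200, 200))
character = Image.new("RGB", (32, 32), (50, 50, 50))
adjusted = match_brightness(character, background)
```

In practice you'd tweak the factor by eye per asset, which is essentially what the Hue/Saturation pass does.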
For assets like the bird, the sheep, and the poof animation, I used green-screen clips from YouTube and transformed them into a clay style using GoEnhance.
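A rough version of the chroma-key pass those green-screen clips need can be sketched per frame with Pillow. The frame, the grey "bird", and the dominance threshold are made-up illustrations; real keyers (Runway, After Effects) also handle spill and soft edges.

```python
from PIL import Image

GREEN_DOMINANCE = 60  # hypothetical threshold: how much greener than red/blue

def key_out_green(img, threshold=GREEN_DOMINANCE):
    """Return an RGBA copy where strongly green pixels become
    transparent - a crude stand-in for a proper chroma-key node."""
    out = img.convert("RGBA")
    pixels = out.load()
    w, h = out.size
    for x in range(w):
        for y in range(h):
            r, g, b, a = pixels[x, y]
            if g - max(r, b) > threshold:
                pixels[x, y] = (r, g, b, 0)  # knock out the screen
    return out

# Hypothetical frame: a green screen with a grey "bird" square on it.
frame = Image.new("RGB", (64, 64), (0, 255, 0))
for x in range(20, 44):
    for y in range(20, 44):
        frame.putpixel((x, y), (128, 128, 128))
keyed = key_out_green(frame)
```

The keyed result can then be pasted over any background with its alpha channel as the mask.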
Finally, I downloaded music and sound effects (also from YouTube).

For the characters' murmuring sounds, I simply recorded myself. :)

It took about 5 minutes but added a lot of life to the animation overall.
This is it. It’s not rocket science, and you don’t need complex tools to achieve it.

I think, in the future, people consuming video content won't care whether it's AI-made or not.

What will matter most is the story quality and scene consistency.

Do you agree? Share your thoughts!

