Runway
Oct 22
Introducing Act-One: a new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.

Learn more about Act-One below.

(1/7)
Act-One allows you to faithfully capture the essence of an actor's performance and transpose it to your generation. Where traditional pipelines for facial animation involve complex, multi-step workflows, Act-One works with a single driving video that can be shot on something as simple as a cell phone.

(2/7)
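For readers who would rather script this driving-video-plus-character-image workflow than use the web editor, here is a minimal sketch of what a job submission could look like. The thread does not describe an Act-One API, so the endpoint path, field names, and polling logic below are purely hypothetical illustrations; check Runway's current developer documentation for the real interface.

```python
import os
import time
import requests

# Hypothetical sketch only: the host, endpoint, and field names below are
# NOT confirmed by the thread; they only illustrate the single driving
# video + single character image workflow that Act-One describes.
API_BASE = "https://api.example-runway-host.dev/v1"   # placeholder host
API_KEY = os.environ["RUNWAY_API_KEY"]                # assumed auth scheme


def submit_act_one_job(driving_video_url: str, character_image_url: str) -> str:
    """Submit one driving video (e.g. a phone recording of an actor) and
    one character image; return a job id."""
    resp = requests.post(
        f"{API_BASE}/character_performance",           # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "driving_video": driving_video_url,        # the actor's performance
            "character_image": character_image_url,    # the character to animate
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_result(job_id: str, poll_seconds: int = 10) -> str:
    """Poll the hypothetical job endpoint until a video URL is returned."""
    while True:
        resp = requests.get(
            f"{API_BASE}/jobs/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        job = resp.json()
        if job["status"] == "SUCCEEDED":
            return job["output_url"]
        if job["status"] == "FAILED":
            raise RuntimeError(job.get("error", "generation failed"))
        time.sleep(poll_seconds)
```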
Without the need for motion-capture or character rigging, Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.

(3/7)
One of the model's strengths is producing cinematic, realistic outputs across a wide range of camera angles and focal lengths, allowing you to generate emotional performances with previously impossible character depth and opening new avenues for creative expression.

(4/7)
A single video of an actor is used to animate a generated character.

(5/7)
With Act-One, eye-lines, micro expressions, pacing and delivery are all faithfully represented in the final generated output.

(6/7)
Access to Act-One will begin gradually rolling out to users today and will soon be available to everyone.

To learn more, visit runwayml.com/research/intro…

(7/7)


More from @runwayml

Aug 8
Explore adding GVFX to any footage you have, from clips shot on your phone to high-quality cinematic action.

Learn how:

(1/7) academy.runwayml.com/gen3-alpha/gen…
Jul 29
Today we are releasing Gen-3 Alpha Image to Video. This update allows you to use any image as the first frame of your video generation, either on its own or with a text prompt for additional guidance.

Image to Video is a major update that greatly improves the artistic control and consistency of your generations. See more below.

(1/10)
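As a rough illustration of the "any image as the first frame, with an optional text prompt" workflow, here is a minimal sketch using Runway's Python SDK (`runwayml`). The SDK, model identifier, and parameter names shown here are assumptions that may differ from the current API, so confirm them against the official developer docs before relying on this.

```python
from runwayml import RunwayML  # assumes the official `runwayml` Python SDK

# The client is assumed to read the API key from the RUNWAYML_API_SECRET
# environment variable by default.
client = RunwayML()

# Use an existing image as the first frame of the generation, with an
# optional text prompt for additional guidance.
task = client.image_to_video.create(
    model="gen3a_turbo",                                # assumed model identifier
    prompt_image="https://example.com/first_frame.png", # placeholder image URL
    prompt_text="Slow dolly-in on the subject, soft morning light.",
)

print(f"Submitted task {task.id}; poll its status until the video is ready.")
```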
Jul 12
Gen-3 Alpha can simulate liquids such as water, paint, oil, honey and molten glass. All with realistic viscosity, physics-based interactivity and caustics.

(1/7)
Prompt: A dynamic motion shot of ethereal underwater caustics dancing across a sandy seabed. Shimmering patterns of light ripple and flow, creating intricate lace-like projections on the ocean floor. The camera slowly pans, following the mesmerizing play of refracted sunlight as it filters through unseen waves above. Tiny particles suspended in the water catch the light, adding depth and dimension to the scene. The caustics shift and morph, their intensity waxing and waning as if affected by gentle currents.

(2/7)
Prompt: A dynamic motion shot of a hyper-realistic ocean simulation confined to a 3D open box floating in darkness. Waves surge and recede, crashing against rocky outcrops with lifelike physics. Foam forms intricate patterns as water swirls and eddies around the stones. The camera slowly pans, capturing the play of light on the water's surface and the depth of the turquoise liquid. The open box emphasizes the contrast between the vivid simulation and the surrounding black void.

(3/7)
Jun 17
Introducing Gen-3 Alpha: Runway’s new base model for video generation.

Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions.



(1/10) runwayml.com/gen-3-alpha
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training, and represents a significant step towards our goal of building General World Models.

Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.

(2/10)
Trained jointly on videos and images, Gen-3 Alpha will power Runway's Text to Video, Image to Video and Text to Image tools, existing control modes such as Motion Brush, Advanced Camera Controls and Director Mode, and upcoming tools to enable even more fine-grained control over structure, style and motion.

Gen-3 Alpha will also be released with a new set of safeguards, including a new and improved in-house visual moderation system and C2PA provenance standards.

Prompt: An astronaut running through an alley in Rio de Janeiro.

(3/10)
Jan 26, 2023
Learn how to turn any video clip into an AI masterpiece with today's Runway Academy.
Step 1: Select your source video and upload it to Runway. Green Screen your subject, then export it as a PNG sequence (a sketch of the export step follows below).
Step 2: Using the first frame from your PNG sequence, head to Image to Image and start to define the style you like.
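The PNG-sequence export in Step 1 can also be done locally. Below is a minimal sketch that shells out to ffmpeg to split a green-screened clip into numbered PNG frames; the file names are placeholders, and Runway's own export settings may differ.

```python
import pathlib
import subprocess

# Placeholder file names; substitute your own green-screened export.
source_clip = "greenscreen_export.mp4"
frames_dir = pathlib.Path("png_sequence")
frames_dir.mkdir(exist_ok=True)

# ffmpeg writes one PNG per frame; frame_0001.png is the first frame
# you would then bring into Image to Image to define the style.
subprocess.run(
    ["ffmpeg", "-i", source_clip, str(frames_dir / "frame_%04d.png")],
    check=True,
)
```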
