Introducing Act-One: a new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Learn more about Act-One below.
Act-One allows you to faithfully capture the essence of an actor's performance and transpose it to your generation. Where traditional pipelines for facial animation involve complex, multi-step workflows, Act-One works with a single driving video that can be shot on something as simple as a cell phone.
Without the need for motion capture or character rigging, Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.
One of the model's strengths is producing cinematic and realistic outputs across a wide range of camera angles and focal lengths, allowing you to generate emotional performances with previously impossible character depth and opening new avenues for creative expression.
A single video of an actor is used to animate a generated character.
With Act-One, eye-lines, micro expressions, pacing and delivery are all faithfully represented in the final generated output.
Access to Act-One will begin gradually rolling out to users today and will soon be available to everyone.
Today we are releasing Gen-3 Alpha Image to Video. This update allows you to use any image as the first frame of your video generation, either on its own or with a text prompt for additional guidance.
Image to Video is a major update that greatly improves the artistic control and consistency of your generations. See more below.
Gen-3 Alpha can simulate liquids such as water, paint, oil, honey and molten glass, all with realistic viscosity, physics-based interactivity and caustics.
Prompt: A dynamic motion shot of ethereal underwater caustics dancing across a sandy seabed. Shimmering patterns of light ripple and flow, creating intricate lace-like projections on the ocean floor. The camera slowly pans, following the mesmerizing play of refracted sunlight as it filters through unseen waves above. Tiny particles suspended in the water catch the light, adding depth and dimension to the scene. The caustics shift and morph, their intensity waxing and waning as if affected by gentle currents.
Prompt: A dynamic motion shot of a hyper-realistic ocean simulation confined to a 3D open box floating in darkness. Waves surge and recede, crashing against rocky outcrops with lifelike physics. Foam forms intricate patterns as water swirls and eddies around the stones. The camera slowly pans, capturing the play of light on the water's surface and the depth of the turquoise liquid. The open box emphasizes the contrast between the vivid simulation and the surrounding black void.
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training, and represents a significant step towards our goal of building General World Models.
Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.
Trained jointly on videos and images, Gen-3 Alpha will power Runway's Text to Video, Image to Video and Text to Image tools, existing control modes such as Motion Brush, Advanced Camera Controls and Director Mode, and upcoming tools to enable even more fine-grained control over structure, style and motion.
Gen-3 Alpha will also be released with a new set of safeguards, including a new and improved in-house visual moderation system and C2PA provenance standards.
Prompt: An astronaut running through an alley in Rio de Janeiro.