Runway introduces Act-One, a feature that generates animated characters from a user's facial expressions and gestures

The developers of the Runway video creation service have introduced a feature called Act-One. It lets users turn videos of their own facial expressions and gestures into animated characters.

According to the Runway blog, character animation is usually a lengthy process that requires specialized motion capture equipment and extensive manual tracking. This makes it expensive, and creating several characters means repeating much of that work for each one. Runway's developers set out to make the process simpler and significantly cheaper.

To that end, they built Act-One, a neural network feature that takes a recording of an actor's facial expressions and gestures as input and generates an animated character from it. A single recording can also be used to generate multiple characters in different styles, such as a shot for a cartoon or a live-action film. The team notes that it has built in safeguards to prevent users from generating deepfakes with Act-One.

The company has already begun rolling out access to Act-One, and it will soon be available to all Runway users.
