
Runway Act-One With AI-Powered Facial Expression Capture Capability Added to Gen-3 Alpha Model

Runway AI, an artificial intelligence (AI) firm focused on video generation models, announced a new feature on Tuesday. Dubbed Act-One, the new capability is available within the company’s latest Gen-3 Alpha video generation model and is said to accurately capture facial expressions from a source video and then reproduce them on an AI-generated character in a video. The feature addresses a significant pain point in AI video generation technology: converting real people into AI characters without losing their realistic expressions.

Runway Act-One Capability in Gen-3 Alpha Introduced

In a blog post, the AI firm detailed the new video generation capability. Runway stated that the Act-One tool can create live-action and animated content using video and voice performances as inputs. The tool is aimed at offering expressive character performance in AI-generated videos.

AI-generated videos have changed the video content creation process significantly, as individuals can now generate specific videos using natural language text prompts. However, certain limitations have slowed the adoption of this technology. One such limitation is the lack of controls to change a character’s expressions in a video or to improve their performance in terms of line delivery, gestures, and eye movement.

With Act-One, Runway is trying to bridge that gap. The tool, which only works with the Gen-3 Alpha model, simplifies the facial animation process, which is often complex and requires multi-step workflows. Traditionally, animating such characters has required recording an individual from multiple angles, manual face rigging, and capturing their facial motion separately.

Runway claims Act-One replaces this workflow with a two-step process. Users record a video of themselves or an actor with a single camera, which can even be a smartphone, and select an AI character. Once done, the tool is claimed to faithfully capture not only facial expressions but also finer details such as eye movements, micro-expressions, and the style of delivery.
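To make that two-step flow concrete, below is a minimal, purely illustrative sketch of how a driving performance video and a chosen character might be submitted to such a service programmatically. Runway’s announcement describes Act-One as a tool inside its own interface; the endpoint URL, field names, and response format here are hypothetical placeholders and do not reflect Runway’s actual API.

```python
import requests

# Hypothetical endpoint and field names -- for illustration only,
# not Runway's actual Act-One or Gen-3 Alpha API.
API_URL = "https://api.example.com/v1/act-one/generations"
API_KEY = "YOUR_API_KEY"  # placeholder credential


def animate_character(driving_video_path: str, character_image_path: str) -> dict:
    """Illustrates the two-step idea: (1) a single-camera performance video,
    (2) a selected AI character, submitted together for rendering."""
    with open(driving_video_path, "rb") as video, open(character_image_path, "rb") as character:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={
                "driving_video": video,        # e.g. a clip shot on a smartphone
                "character_image": character,  # the AI character to animate
            },
        )
    response.raise_for_status()
    return response.json()  # hypothetically, a job ID or URL for the generated clip


if __name__ == "__main__":
    print(animate_character("performance.mp4", "character.png"))
```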

Highlighting the scope of this feature, the company stated in the blog post, “The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video. This versatility opens up new possibilities for inventive character design and animation.”

Notably, while Act-One can be used for animated characters, it also works for live-action characters in a cinematic sequence. Further, the tool can capture these details even when the angle of the actor’s face differs from the angle of the AI character’s face.

The feature is currently being rolled out to all users gradually. However, since it only works with Gen-3 Alpha, those on the free tier will get a limited number of tokens to generate videos with the tool.



