OpenAI rival releases first 'consistent' video-generating AI

Runway says its Gen-4 model can create consistent scenes and characters, something previous AI video generators have struggled to do.

"AI-generated videos can struggle to maintain consistency in storytelling," Runway wrote on X on April 1. "By using visual references combined with instructional commands, Gen-4 allows users to create images and videos with a consistent style, theme, location, continuity, and narrative control."

Some footage and short films created by Runway Gen-4.

Gen-4 can generate consistent characters and locations, then stitch footage together, re-rendering those elements from whatever angles and positions the user requests. The result is a seamless scene that “preserves the style, mood, and cinematic elements of each frame.”

Gen-4 is now available to paid individual users and enterprise customers. To use it, they open Runway’s tool, create the initial content from a text prompt or reference photo, and then describe the composition they want.
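For developers, Runway also exposes its video models through a public API. As a rough sketch of how the workflow above might look in code, assuming Runway's official Python SDK (runwayml) and its image-to-video endpoint, note that the model identifier, parameters, and image URL below are illustrative assumptions rather than details from this article:

    # Hypothetical sketch using Runway's Python SDK (pip install runwayml).
    # Model name, parameters, and URLs are assumptions, not from the article.
    import time

    from runwayml import RunwayML

    client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

    # Step 1: start from a reference photo plus a text prompt describing the scene.
    task = client.image_to_video.create(
        model='gen4_turbo',                                # assumed model identifier
        prompt_image='https://example.com/reference.png',  # placeholder reference photo
        prompt_text='The same woman, now outdoors at dusk, camera panning slowly left',
        ratio='1280:720',
        duration=10,
    )

    # Step 2: generation runs asynchronously, so poll the task until it finishes.
    while True:
        task = client.tasks.retrieve(task.id)
        if task.status in ('SUCCEEDED', 'FAILED'):
            break
        time.sleep(10)

    # On success, the task's output holds the generated video URL(s).
    print(task.status, getattr(task, 'output', None))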

Runway did not detail Gen-4’s training process, but it posted a series of 60- to 100-second AI-generated videos spanning a variety of genres, from live action to animation. One clip, for example, shows a woman whose appearance stays consistent across different shots, settings, and lighting conditions. According to The Verge, the clips are “much more consistent and seamless” than the output of current AI video generators like OpenAI’s Sora.

Runway introduced Gen-4 a year after Gen-3 Alpha, which let users create videos longer than a minute. Gen-3 Alpha drew controversy over reports that it had been trained on thousands of videos scraped from YouTube and film archives without permission.

Founded in 2018, Runway is a prominent AI startup that provides tools for fast video editing, such as removing backgrounds or adding effects. Its technology was also used to create AI effects for the Oscar-winning film Everything Everywhere All At Once.
