"We're really close to having full feature films being generated."
Video to Video
Stable Diffusion is currently getting all the glory for its impressive ability to generate entire — though perhaps not entirely original — images from text prompts. But one of its cocreators, a startup called Runway, has since parted ways and gone on to release a new generative AI called Gen-1 that's capable of transforming videos into almost any visual style a user wants.
And we don't mean just messing with the color grading, folks. We're talking about taking a video of a guy walking down the street and turning it into a full-blown claymation, or even making a shot of a man trudging in snow look like the original footage of the moon landing. The company calls this approach of using a source video to generate a new one "video to video."
"It's the next step forward in generative AI," the company said in a new demo.
Gen-1 currently boasts five different modes. The first, "stylization," applies the style of a still image or a text prompt to your source video. "Storyboard" turns mockups into stylized animations, like transforming books on a tabletop into skyscrapers in a night skyline.
The "mask" mode allows you to quickly isolate and transform a subject even if they're moving, and spruce them up however you prompt it to. "Render" turns untextured 3D models into a finished scene, and finally, "customization" simply gives you fine-tuned control over all of the above.
Today, Generative AI takes its next big step forward.
Introducing Gen-1: a new AI model that uses language and images to generate new videos out of existing ones.
Sign up for early research access: https://t.co/7JD5oHrowP
— Runway (@runwayml) February 6, 2023
This isn't the first AI capable of generating whole videos or even stylistically modifying them. But, from what we can tell in the demo, it looks noticeably more impressive, and dare we say, transformative (or at least eye-popping) — although we'll have to wait until it's more widely available in the coming weeks before reaching a final verdict.
According to Runway CEO and cofounder Cristóbal Valenzuela, it's "one of the first models" developed with video makers in mind, capable of generating much longer footage. In fact, some of the company's previous iterations of the technology have already been used in movies like last year's hit "Everything Everywhere All at Once."
"It comes with years of insight about how filmmakers and VFX editors actually work on post-production," Valenzuela told MIT Technology Review.
The startup's CEO even ventures that "we're really close to having full feature films being generated."
"We're close to a place where most of the content you'll see online will be generated," he added.
More on generative AI: Here's Why AI Is so Awful at Generating Pictures of Human Hands