"We have decided not to release the Imagen Video model or its source code until these concerns are mitigated."
Challenge Mode
Google's impressive-looking new video-generating neural network is up and running — but issues with "problematic" content mean that for now, the company is keeping it from public release.
In a paper about its Imagen Video model, Google spends pages waxing poetic about the artificial intelligence's amazing text-to-video capabilities before briefly admitting that, due to "several important safety and ethical challenges," the company isn't releasing it.
As for what that problematic content looks like, specifically, the company characterizes it as "fake, hateful, explicit or harmful." Translation? It sounds like this naughty AI is capable of spitting out videos that are sexual, violent, racist, or otherwise unbecoming of an image-conscious tech giant.
"While our internal testing suggests much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter," the company's researchers wrote. "We have decided not to release the Imagen Video model or its source code until these concerns are mitigated."
Ghost in the Machine
Under the "biases and limitations" subheading, the researchers explained that although they tried to train Imagen against "problematic data" in order to teach it how to filter that stuff out, it's not quite there yet.
The admission underscores an intriguing reality in machine learning: it's not uncommon for researchers to build a model capable of extraordinary results (and Imagen really does look very impressive) while struggling to control what it might output.
In sum, it sounds a lot like the issues we've seen with other neural networks, from the role-playing "Dungeon Master" AI that some users co-opted to simulate child abuse, to the less severe tendency of the Craiyon image generator, formerly known as DALL-E Mini, to create realistic photos of drugs.
Where Imagen is different, of course, is that it generates video from text, which until very recently wasn't possible.
It's one thing to read about or see a still image of gore or porn; it's another thing entirely to watch it in moving video, which makes Google's decision seem pretty astute.
More problematique AI: Walmart App Virtually Tries Clothes on Your Body... If You Strip