Trippy.

Monkey See

Researchers from the National University of Singapore and The Chinese University of Hong Kong claim to have created an AI that can reconstruct "high-quality" video from brain signals.

As the researchers explain in a yet-to-be-peer-reviewed paper, their AI model, dubbed MinD-Video, is "co-trained" on publicly available fMRI data — specifically, recordings taken while participants watched videos — and an augmented version of the AI image generator Stable Diffusion.

Using this "two-module pipeline designed to bridge the gap between image and video brain decoding," they were able to generate "high-quality" AI reconstructions of the videos originally shown to the participants, based purely on their brain readings.

According to the researchers, their model was able to reconstruct these videos with an average accuracy of 85 percent, based on "various semantic and pixel-level metrics."
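The article doesn't spell out which "semantic and pixel-level metrics" the researchers used. As a rough illustration only — not the authors' actual evaluation code — reconstruction quality is commonly scored at the pixel level with something like mean squared error and at the semantic level with cosine similarity between embedding vectors. A minimal sketch, with hypothetical helper names:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic-level score: cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pixel_mse(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Pixel-level score: mean squared error between two video frames."""
    return float(np.mean((frame_a.astype(float) - frame_b.astype(float)) ** 2))

# Toy check: a frame compared against itself scores perfectly on both metrics.
frame = np.random.rand(64, 64, 3)
embedding = np.random.rand(512)
print(cosine_similarity(embedding, embedding))  # ~1.0
print(pixel_mse(frame, frame))                  # 0.0
```

In practice, semantic embeddings would come from a pretrained vision model and pixel-level comparisons often use richer measures such as SSIM, but the scoring idea is the same: compare each reconstructed frame against the ground-truth frame it should match.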

"Understanding the information hidden within our complex brain activities is a big puzzle in cognitive neuroscience," the paper reads. "We show that high-quality videos of arbitrary frame rates can be reconstructed with MinD-Video using adversarial guidance."

Credit: Chen et al.

Input Output

The new paper builds on the researchers' previous efforts to use AI to recreate still images from brain readings alone.

The AI's new video renderings, on the whole, are pretty impressive, as demonstrated in direct side-by-side comparisons of the original and "reconstructed" videos on the researchers' website.

For instance, a video of a crowd of people walking down a busy street translated to an equally crowded scene, albeit with more vivid colors. An underwater scene of colorful fish turned into an even more vibrant underwater scene.


But the effect is far from perfect. For instance, a video of a jellyfish was inexplicably transformed into a clip of a fish swimming, while a video of a sea turtle was reinterpreted as footage of a fish.


Brain-Reading Helmet

The researchers argue that these AI generations can offer neurological insights as well, for example by showing the dominance of the visual cortex in visual perception.

Though this research is fascinating, we're still far from a future in which we're able to strap on a helmet and get a perfectly accurate, AI-generated video stream of whatever's floating around our cranium.

And frankly, that's probably a good thing, given the data privacy implications.


