In an era where dedicated augmented reality (AR) and virtual reality (VR) devices are becoming commonplace, creating lifelike environments has been a central focus of research. Even with the latest tech, can you imagine being able to reach out and “touch” everything that you see? These researchers from MIT might just be able to make it happen.
According to a recent news release from MIT, its Computer Science and Artificial Intelligence Laboratory (CSAIL) has come up with a technique through which viewers can reach out, touch, and manipulate objects in videos.
Dubbed Interactive Dynamic Video (IDV), the technique uses conventional cameras and algorithms that detect an object's tiny, nearly invisible vibrations to create video simulations that users can virtually interact with.
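To get an intuition for the idea, here is a toy sketch (not CSAIL's actual IDV pipeline, and all numbers are illustrative): treat one pixel's brightness over time as a 1-D signal, recover its dominant vibration frequency with an FFT, then predict how that mode would respond to a virtual "poke" by ringing it as a damped oscillator.

```python
import numpy as np

fps = 240.0                       # assumed high-speed camera frame rate
t = np.arange(0, 2.0, 1.0 / fps)  # two seconds of "video"

# Fake pixel signal: a tiny 12 Hz wiggle buried in sensor noise.
true_freq = 12.0
signal = 0.05 * np.sin(2 * np.pi * true_freq * t)
signal += 0.01 * np.random.default_rng(0).normal(size=t.size)

# Step 1: recover the dominant vibration frequency from the signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Step 2: predict the object's response to a virtual force as a
# damped oscillator ringing at the recovered frequency.
damping = 0.5                     # illustrative damping coefficient
response = np.exp(-damping * t) * np.sin(2 * np.pi * dominant * t)

print(f"recovered frequency: {dominant:.1f} Hz")
```

The real system recovers many such vibration modes across the whole image and combines them, but the core insight is the same: tiny motions in ordinary video encode enough physics to simulate new interactions.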
“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”
The implications of this technique are significant. IDV could help filmmakers cut the cost (and labor) of adding computer-generated effects to real footage. And because it can predict how objects will respond to real-world forces, it could also help engineers assess a structure's integrity without physically testing it.
It could also improve AR/VR games, such as Pokémon Go. Imagine Mewtwo bouncing off walls or interacting with surrounding obstacles. Check out the lab's video below: