- The system tracks the motion of a person’s mouth using a 3-D camera attached to the headset by a short boom. Movements of the upper part of the face are measured by strain gauges added to the foam padding that fits the headset to the face. Once the two data sources are combined, an accurate 3-D representation of the user’s facial movements can be used to animate a virtual character (a rough sketch of how such a fusion might work appears after this list).
- For now, that software requires a brief calibration process the first time you use the system. First you give your face muscles a 10-second workout, contorting them into a few different expressions while sitting in front of a 3-D camera and wearing a headset with the display section removed, so the camera can get a full view of your face (one way such a calibration could work is sketched after this list).
- The team is also working on other techniques that could make it easier to copy your real self into a virtual world. Many tools have already been developed to create 3-D replicas of people’s bodies and faces using conventional and 3-D cameras. Li recently built a system that tackles the more challenging task of producing a realistic 3-D re-creation of a person’s hairstyle.
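The article does not say how the two data streams are merged, so the following Python sketch is only an illustration of the general idea under assumptions of our own: the boom-mounted camera drives lower-face expression weights, the strain gauges drive upper-face weights through a per-user linear mapping, and the two sets are combined each frame to drive an avatar rig. Every name here (`mouth_weights_from_depth`, `LOWER_FACE`, and so on), along with the linear model, is invented for illustration and is not taken from the research.

```python
import numpy as np

# Hypothetical expression (blendshape) names for a simple avatar rig (illustrative only).
LOWER_FACE = ["jaw_open", "mouth_smile", "mouth_pucker"]
UPPER_FACE = ["brow_raise", "brow_furrow", "eye_squint"]

def mouth_weights_from_depth(depth_frame: np.ndarray) -> dict:
    """Stub: estimate lower-face expression weights from the boom-mounted
    3-D camera's depth frame. A real tracker would fit a face model to the
    visible mouth region; here we just return neutral weights."""
    return {name: 0.0 for name in LOWER_FACE}

def upper_weights_from_strain(strain_readings: np.ndarray,
                              calibration: np.ndarray) -> dict:
    """Map the strain-gauge signals in the foam padding to upper-face
    expression weights via an assumed per-user linear calibration."""
    weights = np.clip(calibration @ strain_readings, 0.0, 1.0)
    return dict(zip(UPPER_FACE, weights))

def fuse_frame(depth_frame: np.ndarray,
               strain_readings: np.ndarray,
               calibration: np.ndarray) -> dict:
    """Combine the two sources for one frame: the camera drives the lower
    face, the strain gauges drive the upper face. The merged weights would
    then be handed to an avatar renderer."""
    blendshapes = {}
    blendshapes.update(mouth_weights_from_depth(depth_frame))
    blendshapes.update(upper_weights_from_strain(strain_readings, calibration))
    return blendshapes
```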
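The calibration step is likewise described only at a high level. One simple way such a routine could work, and the assumption the sketch below makes, is to record strain-gauge readings while the 3-D camera (which can see the whole face once the display is removed) provides ground-truth expression weights for the same frames, then fit the per-user mapping by least squares. The linear model and all names are assumptions, not details from the article.

```python
import numpy as np

def calibrate(strain_samples: np.ndarray,
              camera_weights: np.ndarray) -> np.ndarray:
    """Fit the assumed per-user linear mapping used in the sketch above.

    strain_samples : (n_frames, n_gauges) gauge readings recorded while the
                     user holds a few expressions during the ~10-second routine.
    camera_weights : (n_frames, n_expressions) ground-truth upper-face weights
                     from the 3-D camera, which can see the whole face because
                     the display section has been removed.

    Returns an (n_expressions, n_gauges) matrix M so that, at run time,
    M @ strain_readings approximates the upper-face expression weights.
    """
    # Least-squares solution of  strain_samples @ M.T ≈ camera_weights
    m_transposed, *_ = np.linalg.lstsq(strain_samples, camera_weights, rcond=None)
    return m_transposed.T
```

At run time the returned matrix would be passed as the `calibration` argument of `upper_weights_from_strain` in the earlier sketch.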