There are two examples in this demo:
- The most millennial/Californian conversation ever recorded in human history.
- Facebook Reality Labs’ demonstration of the most lifelike avatars in VR to date.
Let’s take a closer look at the avatar example:
“It’s a dome fitted with 132 camera lenses and 350 lights, and they’re all aimed at the center, where a subject must be seated before their facial features can be mapped over the course of an hour.”
So essentially they are using a photogrammetry workflow to generate a lifelike model of the face. Photographs are taken from many angles, then ingested into software that analyzes the imagery and estimates an accurate 3D model of the face. The lights add a second level of accuracy: as they fire from different positions, the shading they produce lets the software refine the 3D shape of the face. Combine these two reconstructions and you have a pretty accurate model of a person’s face.
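To make that second step concrete, here’s a minimal sketch of classic photometric stereo, the kind of shading-based technique the lights enable. It assumes a Lambertian (matte) surface and known light directions; the light vectors and pixel intensities below are made up for illustration, and this is not Facebook’s actual pipeline:

```python
import numpy as np

# Photometric stereo: under a Lambertian model, the brightness of a
# surface point is I = albedo * (L . n), where L is the light direction
# and n is the surface normal. With 3+ known lights, we can solve for n.

# Hypothetical unit light directions, one per light in the dome.
L = np.array([
    [0.0, 0.0, 1.0],   # light straight ahead
    [0.7, 0.0, 0.7],   # light from the right
    [0.0, 0.7, 0.7],   # light from above
    [-0.7, 0.0, 0.7],  # light from the left
])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Observed pixel intensities for one surface point under each light
# (made-up numbers for illustration).
I = np.array([0.9, 0.4, 0.5, 0.6])

# Least-squares solve for g = albedo * n, then split into magnitude
# (albedo) and direction (surface normal).
g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo

print("estimated albedo:", round(float(albedo), 3))
print("estimated surface normal:", np.round(normal, 3))
```

Run per pixel across the whole face, those normals recover the fine surface detail that multi-view reconstruction alone tends to smooth over, which is the “second level of accuracy” the lights provide.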
Now all we have to do is animate it live.
A second area with even more cameras captures how your body moves, to help animate your avatar even more realistically. For the live side, face-tracking cameras, similar to the technology behind the selfie filters in Facebook, Snapchat, and now Instagram, are mounted on the headset alongside eye-tracking and head-tracking sensors.
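A classic way the live step works in real-time avatar systems is blendshapes: the captured face is stored as a neutral mesh plus per-expression offsets, and the tracking supplies blend weights every frame. FRL’s actual system is more sophisticated than this, but here’s a toy sketch of the idea, with a made-up four-vertex “mesh” and hypothetical expression names:

```python
import numpy as np

# Linear blendshapes: the animated mesh is the neutral scan plus a
# weighted sum of expression deltas. Face tracking supplies the
# weights every frame. All geometry here is tiny and made up.

neutral = np.array([          # hypothetical 4-vertex "face" mesh
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])

# Per-expression vertex offsets relative to the neutral mesh.
deltas = {
    "smile":      np.array([[0, 0, 0], [0.1, 0.05, 0], [0, 0, 0], [-0.1, 0.05, 0]]),
    "brow_raise": np.array([[0, 0.1, 0], [0, 0.1, 0], [0, 0, 0], [0, 0, 0]]),
}

def animate(weights):
    """Blend the neutral mesh with weighted expression deltas."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# Each frame, tracked weights (e.g. from the headset cameras) drive the mesh.
frame_weights = {"smile": 0.8, "brow_raise": 0.3}
print(animate(frame_weights))
```

The linear model is a simplification, but the core loop is the same: per-frame tracked parameters deforming a model captured in advance.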
With all of this, you are at the point where you can essentially create an accurate digital replica of yourself.
The process above is pretty time-consuming and costly, but it won’t need to be once it operates at scale. In the short term I can see the re-emergence of “avatar booths”, much as photo booths were popular in the past. These would soon be replaced by accurate mobile tools, most likely phone-based.
Do we need it?
I see it having a huge impact on remote meetings and training. I also think it will have a big impact for people who find social interaction difficult, letting them engage with others from a safe distance.