Quest Pro’s face tracking capabilities will soon be used to make Meta’s avatars more expressive, but next-gen avatars stand to benefit even more from the new technology.
One of the Quest Pro’s big new features is a face tracking system that uses internal cameras to track your eyes and some of your facial movements. Combined with a calibration-free machine learning model, the headset turns what it sees into inputs that can drive any avatar’s animation.
Key Quest Pro Coverage:
Quest Pro revealed – full details, price and release date
Quest Pro Hands-on – The Dawn of the Mixed Reality Headset Era
Quest Pro technical analysis – what is promising and what is not
Touch Pro controllers revealed – also compatible with Quest 2
In the near future, this will be used with Meta’s existing avatars. And while it certainly makes them more expressive, they still look a little off.
This is probably the result of Meta’s current avatar system not being built with this level of face tracking in mind; the avatars’ underlying animation rigging simply doesn’t seem up to the job. As it stands, the current system can’t take full advantage of the inputs the Quest Pro’s face tracking can provide.
Fortunately, Meta has built a tech demo that shows what’s possible when an avatar is designed with the Quest Pro’s face tracking in mind (and when nearly all of the headset’s processing power is dedicated to it).
Yes, it’s still a bit jittery, but every movement you see here is driven by the user making the same movement, including puffing out their cheeks or moving their mouth from side to side. Overall, I’d argue the face is a more complete representation that manages to stay out of the uncanny valley.
I was able to test this demo for myself recently on a Quest Pro, looking into a virtual mirror and seeing a character like this (which Meta calls Aura) in place of my own reflection. I was amazed that, despite no special calibration, the face I saw in the mirror seemed to mimic any movement I could think to make.
I was especially drawn to the detail in the skin. If I scrunched my nose or squinted my eyes, I could see the skin around them crease in response. Subtle details like these, and the way the mouth pulls at the cheeks, add so much to the feeling that this isn’t just something in front of me, but something with someone living behind it.
Whether the expressions look exactly as they would if I were the one behind the mask is another question. Since this avatar’s face doesn’t look like my own, it’s hard to tell. But the movements are, at a minimum, plausible. It’s an important first step toward virtual avatars that feel natural and believable.
Meta says it will release the Aura demo as an open-source project so developers can see how to integrate face tracking inputs with avatars. The company says developers can use a single set of tools to drive human avatars or non-human avatars such as animals or monsters, without having to tune each avatar individually.
According to Meta, developers can tap into the Face Tracking API, which uses values similar to FACS, a well-known system that describes the movement of various muscles in the human face.
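To make that idea concrete, here is a minimal illustrative sketch in Python. The expression names and the driver function are made up for this article, not Meta’s actual Face Tracking API, but they show the basic pattern described above: each tracked facial movement arrives as a zero-to-one weight, similar to a FACS action unit, which is then copied onto an avatar’s blendshapes every frame.

```python
# Illustrative sketch only: expression names and classes below are hypothetical
# stand-ins, not Meta's actual Face Tracking API.

from dataclasses import dataclass, field

# Hypothetical subset of FACS-like expression channels.
EXPRESSIONS = [
    "brow_raise_left", "brow_raise_right",
    "cheek_puff", "jaw_drop",
    "smile_left", "smile_right",
]

@dataclass
class AvatarFace:
    # Blendshape weights, keyed by expression name, each in [0.0, 1.0].
    blendshapes: dict = field(
        default_factory=lambda: {name: 0.0 for name in EXPRESSIONS}
    )

def apply_face_tracking(avatar: AvatarFace, tracked_weights: dict) -> None:
    """Copy tracked zero-to-one expression weights onto the avatar's blendshapes."""
    for name, weight in tracked_weights.items():
        if name in avatar.blendshapes:
            # Clamp defensively; tracked values should already be in [0, 1].
            avatar.blendshapes[name] = max(0.0, min(1.0, weight))

# Example frame of tracked data: the user puffs their cheeks and half-smiles.
frame = {"cheek_puff": 0.9, "smile_left": 0.5, "smile_right": 0.45}
avatar = AvatarFace()
apply_face_tracking(avatar, frame)
print(avatar.blendshapes)
```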
The system isn’t just useful for animating faces; it also acts as a privacy safeguard for users. According to Meta, developers never get access to raw images of the user’s face. Instead, “when you scrunch your nose or furrow your eyebrows, you get a series of zero-to-one values corresponding to the whole set of facial movements,” says Meta. “These signals make it easy for a developer to preserve the semantics of a player’s original movements when mapping the Face Tracking API onto their own character, even when that character is inhuman or more fantastical.”
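Because those values describe the meaning of a movement rather than the pixels behind it, retargeting them to a non-human character can be little more than a lookup. The sketch below is hypothetical, again using invented channel and blendshape names rather than Meta’s actual tooling, to show how the same zero-to-one signals might be remapped onto a fantastical creature’s rig.

```python
# Hypothetical retargeting sketch (not Meta's tooling): a human "cheek_puff"
# channel drives a creature's throat-pouch blendshape, preserving the semantics
# of the player's movement on a very different face.

# Map from tracked expression channel -> (creature blendshape, gain).
CREATURE_RETARGET_MAP = {
    "cheek_puff": ("throat_pouch_inflate", 1.2),   # exaggerate the effect slightly
    "jaw_drop": ("beak_open", 1.0),
    "brow_raise_left": ("crest_raise", 0.5),
    "brow_raise_right": ("crest_raise", 0.5),
}

def retarget_to_creature(tracked_weights: dict) -> dict:
    """Convert human-centric expression weights into creature blendshape weights."""
    creature = {}
    for channel, weight in tracked_weights.items():
        target = CREATURE_RETARGET_MAP.get(channel)
        if target is None:
            continue  # this rig has no equivalent for the channel
        shape, gain = target
        # Accumulate (e.g. both brows feed one crest) and clamp to [0, 1].
        creature[shape] = min(1.0, creature.get(shape, 0.0) + weight * gain)
    return creature

print(retarget_to_creature({"cheek_puff": 0.8, "brow_raise_left": 0.6, "brow_raise_right": 0.7}))
```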
Meta claims that even the company itself can’t see the images captured by the headset’s cameras, whether inside or outside the device. According to the company, the images are processed on the headset and deleted immediately, without being sent to the cloud or shared with developers.