Augmented Faces lets your app automatically identify different regions of a detected face and use those regions to overlay assets such as textures and models in a way that properly matches the contours of an individual face.
How does Augmented Faces work?
The AugmentedFaces sample app overlays the facial features of a fox onto a user's face using both a 3D model and a texture.
The 3D model consists of two fox ears and a fox nose. Each is a separate bone that can be moved individually to follow the facial region it is attached to:
The texture consists of eye shadow, freckles, and other coloring:
When you run the sample app, it calls APIs to detect a face and overlays both the texture and the models onto the face.
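As a sketch, configuring a session for face detection might look like the following. This is a hedged example, not the sample app's exact code: it assumes you already have an ARCore `Session` created for the front-facing (selfie) camera, and the method and class names from ARCore (`CameraConfigFilter`, `Config.AugmentedFaceMode.MESH3D`) are the real API.

```java
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import java.util.List;

public class FaceSessionSetup {
    // Sketch: enable Augmented Faces on an existing ARCore Session.
    static void configureAugmentedFaces(Session session) {
        // Augmented Faces requires the front-facing camera.
        CameraConfigFilter filter = new CameraConfigFilter(session);
        filter.setFacingDirection(CameraConfig.FacingDirection.FRONT);
        List<CameraConfig> cameraConfigs = session.getSupportedCameraConfigs(filter);
        session.setCameraConfig(cameraConfigs.get(0));

        // Turn on the 3D face mesh so faces are detected and tracked.
        Config config = new Config(session);
        config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
        session.configure(config);
    }
}
```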
Identifying an augmented face mesh
In order to properly overlay textures and 3D models on a detected face, ARCore provides detected regions and an augmented face mesh. This mesh is a virtual representation of the face, and consists of the vertices, facial regions, and the center of the user's head. Note that the orientation of the mesh is different for Sceneform.
When a user's face is detected by the camera, ARCore performs these steps to generate the augmented face mesh, as well as center and region poses:
It identifies the center pose and a face mesh.
- The center pose, located behind the nose, is the physical center point of the user's head (in other words, inside the skull).
- The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose.
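To make "defined relative to the center pose" concrete, here is a small runnable sketch. The pose math is simplified to a translation plus a yaw rotation; in ARCore itself you would call `Pose.transformPoint()` rather than writing this by hand, and the sample coordinates below are made up for illustration.

```java
import java.util.Locale;

public class CenterPoseDemo {
    // Minimal stand-in for Pose.transformPoint(): rotate a vertex given in
    // center-pose (local) space by a yaw angle, then translate it into
    // world space by the center pose's translation.
    static float[] toWorld(float[] centerTranslation, double yaw, float[] localVertex) {
        double c = Math.cos(yaw), s = Math.sin(yaw);
        float wx = (float) (c * localVertex[0] + s * localVertex[2]) + centerTranslation[0];
        float wy = localVertex[1] + centerTranslation[1];
        float wz = (float) (-s * localVertex[0] + c * localVertex[2]) + centerTranslation[2];
        return new float[] { wx, wy, wz };
    }

    public static void main(String[] args) {
        float[] centerTranslation = { 0.1f, 1.5f, -0.3f }; // head center in world space
        float[] noseTipLocal = { 0f, 0f, 0.08f };          // ~8 cm in front of the center
        float[] world = toWorld(centerTranslation, 0.0, noseTipLocal);
        // With zero rotation, the vertex simply shifts by the center translation.
        System.out.printf(Locale.ROOT, "%.2f %.2f %.2f%n", world[0], world[1], world[2]);
    }
}
```

Because every vertex is expressed this way, moving or rotating the center pose moves the whole mesh with the user's head.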
The AugmentedFace class uses the face mesh and center pose to identify face region poses on the user's face. These regions are:
- Left forehead (`FOREHEAD_LEFT`)
- Right forehead (`FOREHEAD_RIGHT`)
- Tip of the nose (`NOSE_TIP`)
These elements (the center pose, face mesh, and face region poses) comprise the augmented face mesh and are used by AugmentedFace APIs as positioning points and regions to place the assets in your app.
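Putting these pieces together, a per-frame query might look like the following sketch. It assumes a `session` already configured for Augmented Faces; the `AugmentedFace` accessors shown (`getCenterPose()`, `getMeshVertices()`, `getRegionPose()`) are the real API, while the anchoring of assets is left as a comment because it depends on your rendering framework.

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import java.nio.FloatBuffer;

public class FaceFrameQuery {
    // Sketch: query center, mesh, and region poses for each tracked face.
    static void onFrame(Session session) throws Exception {
        Frame frame = session.update();
        for (AugmentedFace face : frame.getUpdatedTrackables(AugmentedFace.class)) {
            if (face.getTrackingState() != TrackingState.TRACKING) {
                continue;
            }
            Pose centerPose = face.getCenterPose();        // center of the head
            FloatBuffer vertices = face.getMeshVertices(); // mesh, relative to centerPose
            Pose noseTip = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
            Pose leftForehead = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
            Pose rightForehead = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);
            // Attach your renderables (e.g., fox ears and nose) at these poses.
        }
    }
}
```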
Start using Augmented Faces in your own apps. To learn more, see: