Recent advances in human head modeling make it possible to generate plausible-looking 3D head models via neural representations such as NeRFs and SDFs. Nevertheless, constructing complete, high-fidelity head models with explicitly controlled animation remains a challenge. Furthermore, completing the head geometry from a partial observation, e.g. one coming from a depth sensor, while preserving a high level of detail is often problematic for existing methods.
We introduce a generative model for detailed 3D head meshes built on top of an articulated 3DMM, which allows for explicit animation and high-detail preservation at the same time. Our method is trained in two stages. First, we register a parametric head model with per-vertex displacements to each mesh of the recently introduced NPHM dataset of accurate 3D head scans. The estimated displacements are baked into a hand-crafted UV layout. Second, we train a StyleGAN model to generalize over the UV maps of displacements, which we refer to as HeadCraft. The decomposition into a parametric model and high-quality vertex displacements allows us to animate the model and modify its regions semantically. We demonstrate results for unconditional generation and for fitting to full or partial observations.
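To give a rough intuition of how a baked displacement UV map can be applied back onto the template mesh, below is a minimal sketch. The function name, array shapes, and nearest-neighbor sampling are illustrative assumptions, not the actual HeadCraft code:

```python
import numpy as np

def apply_uv_displacements(verts, vert_uvs, disp_map):
    """Displace template vertices by values sampled from a UV displacement map.

    verts:    (V, 3) template vertex positions (e.g. a subdivided FLAME mesh)
    vert_uvs: (V, 2) per-vertex UV coordinates in [0, 1]
    disp_map: (H, W, 3) displacement map, e.g. produced by the generator
    """
    h, w, _ = disp_map.shape
    # Nearest-neighbor lookup for brevity; bilinear sampling is preferable in practice.
    px = np.clip(np.round(vert_uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(np.round(vert_uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    displacements = disp_map[py, px]   # (V, 3)
    # The displaced mesh keeps the 3DMM topology, so it can still be animated.
    return verts + displacements
```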
Below is an interactive viewer for latent interpolation; the generated displacements are applied to the same FLAME template. Drag the blue cursor around to linearly interpolate between four different latents. The resulting geometry is displayed from three views on the right.
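One natural way to blend four corner latents from a 2D cursor position is bilinear interpolation (linear along each cursor axis). The sketch below is only an illustration of this idea; the function name and argument layout are assumptions, not the viewer's actual code:

```python
import numpy as np

def interpolate_latents(z_corners, u, v):
    """Bilinearly blend four corner latents given a cursor position (u, v) in [0, 1]^2.

    z_corners: (4, D) latent codes at [top-left, top-right, bottom-left, bottom-right]
    """
    z_tl, z_tr, z_bl, z_br = z_corners
    z_top = (1 - u) * z_tl + u * z_tr      # interpolate along the top edge
    z_bot = (1 - u) * z_bl + u * z_br      # interpolate along the bottom edge
    return (1 - v) * z_top + v * z_bot     # feed the result into the generator
```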
For more work on similar tasks, please check out:
@article{sevastopolsky2023headcraft,
  title={HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs},
  author={Sevastopolsky, Artem and Grassal, Philip-William and Giebenhain, Simon and Athar, Shah{R}ukh and Verdoliva, Luisa and Nie{\ss}ner, Matthias},
  publisher={arXiv},
  primaryClass={cs.CV},
  year={2023}
}