RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs
Longwei Guo
Hao Zhu
Yuanxun Lu
Menghua Wu
Xun Cao
Nanjing University, Nanjing, China
 
AAAI 2023 (Oral)
[Paper]
[Supp]
[Code]
Our method reconstructs high-fidelity, accurate geometry and generalizes well across different races, views, lighting conditions, and ages. We recommend watching the supplementary video for more results.

Abstract

We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR). While tremendous effort has been devoted to parametric SVFR, a visible gap remains between the reconstructed 3D shape and the ground truth. We believe there are two major obstacles: 1) the representation power of a parametric model is limited by the face database it is built on; 2) the 2D images and 3D shapes in fitted datasets are distinctly misaligned. To resolve these issues, we create a large-scale pseudo 2D&3D dataset by first rendering detailed 3D faces and then swapping the rendered faces into in-the-wild images. These pseudo 2D&3D pairs are built from publicly available datasets, which eliminates the misalignment between 2D and 3D data while covering diverse appearances, poses, scenes, and illumination. We further propose a non-parametric scheme to learn a well-generalized SVFR model from the created dataset, and the proposed hierarchical signed distance function proves effective in predicting middle-scale and small-scale 3D facial geometry. Our model outperforms previous methods on the FaceScape-wild/lab and MICC benchmarks and generalizes well to various appearances, poses, expressions, and in-the-wild environments.
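
As a rough, hypothetical sketch of the pair-synthesis step described above (rendering a detailed 3D face and swapping it into an in-the-wild photo so that the image and geometry are aligned by construction), the loop below uses placeholder helpers `render_face` and `swap_face`; it is not the released pipeline code.

```python
# Hypothetical sketch of the pseudo 2D&3D pair synthesis; render_face and
# swap_face are illustrative placeholders, not functions from the released code.

def build_pseudo_pairs(detailed_meshes, wild_photos, render_face, swap_face):
    """Create aligned (image, 3D geometry) training pairs."""
    pairs = []
    for mesh in detailed_meshes:          # detailed 3D face scans
        for photo in wild_photos:         # in-the-wild photos: pose, lighting, scene
            # 1) Render the detailed 3D face under the photo's estimated pose/lighting.
            rendered = render_face(mesh, pose=photo["pose"], lighting=photo["lighting"])
            # 2) Swap the rendered face into the photo, so the resulting image is
            #    exactly aligned with the 3D mesh by construction.
            blended = swap_face(photo["image"], rendered)
            pairs.append((blended, mesh))
    return pairs
```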


Video


Method

Overview of the proposed pseudo 2D&3D pair synthesis pipeline.
Overview of the hierarchical SDF-based network.
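
To make the hierarchical idea concrete, here is a minimal PyTorch sketch of a two-level SDF predictor: a coarse branch maps a global image feature and a 3D query point to a signed distance, and a fine branch adds a residual from local features to capture middle- and small-scale geometry. The layer widths, feature dimensions, and conditioning scheme are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalSDF(nn.Module):
    """Illustrative two-level SDF: coarse shape plus a fine residual for detail.

    Dimensions and structure are assumptions for exposition only.
    """

    def __init__(self, coarse_feat_dim=256, fine_feat_dim=64):
        super().__init__()
        # Coarse branch: global image feature + 3D query point -> signed distance.
        self.coarse = nn.Sequential(
            nn.Linear(coarse_feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
        # Fine branch: local per-point feature + query point -> SDF residual.
        self.fine = nn.Sequential(
            nn.Linear(fine_feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, coarse_feat, fine_feat, points):
        # points:      (B, N, 3) query locations in face-aligned space
        # coarse_feat: (B, coarse_feat_dim) global feature, broadcast to all points
        # fine_feat:   (B, N, fine_feat_dim) per-point local features
        B, N, _ = points.shape
        g = coarse_feat.unsqueeze(1).expand(B, N, -1)
        sdf_coarse = self.coarse(torch.cat([g, points], dim=-1))
        sdf_residual = self.fine(torch.cat([fine_feat, points], dim=-1))
        # Final signed distance = coarse shape plus fine-scale correction.
        return sdf_coarse + sdf_residual
```

The face surface would then be extracted from the zero level set of the predicted SDF, for example with marching cubes.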


Results

Qualitative comparison.


Paper and Supplementary Material

RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs.
In AAAI Conference on Artificial Intelligence (AAAI), 2023.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This work was supported by the National Key R&D Program of China (grant 2022YFF0902401), NSFC grants 62025108 and 62001213, and gift funding from Huawei Research and the Tencent Rhino-Bird Research Program.