J. Booth, E. Antonakos, S. Ploumpis, G. Trigeorgis, Y. Panagakis and S. Zafeiriou, "3D Face Morphable Models "In-the-Wild"," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 5464-5473. doi: 10.1109/CVPR.2017.580

3D face morphable models "in-the-wild"

Author: Booth, James1; Antonakos, Epameinondas2; Ploumpis, Stylianos1; Trigeorgis, Georgios1; Panagakis, Yannis; Zafeiriou, Stefanos3
Organizations: 1Imperial College London, UK
2Amazon, Berlin, Germany
3University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 2.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2019100330979
Language: English
Published: Institute of Electrical and Electronics Engineers, 2017
Publish Date: 2019-10-03
Description:

Abstract

3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all of these datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regard to illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.
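The shape model described above is linear: a face instance is the model mean plus a weighted combination of identity and expression basis vectors. The following is a minimal NumPy sketch of that formulation; the array names, dimensions, and random bases are illustrative assumptions, not the authors' implementation or data.

```python
import numpy as np

# Minimal sketch of a linear 3DMM shape model (illustrative only;
# dimensions and bases are assumptions, not the paper's actual model).
rng = np.random.default_rng(0)

n_vertices = 100        # toy mesh size (assumed)
n_id, n_exp = 5, 3      # number of identity / expression components (assumed)

mean_shape = rng.standard_normal(3 * n_vertices)       # stacked (x, y, z) coords
U_id = rng.standard_normal((3 * n_vertices, n_id))     # identity basis
U_exp = rng.standard_normal((3 * n_vertices, n_exp))   # expression basis

def reconstruct(p_id, p_exp):
    """Shape instance: s = mean + U_id @ p_id + U_exp @ p_exp."""
    return mean_shape + U_id @ p_id + U_exp @ p_exp

# Zero parameters recover the mean face exactly.
s = reconstruct(np.zeros(n_id), np.zeros(n_exp))
print(np.allclose(s, mean_shape))  # True
```

Fitting such a model to an image amounts to optimising the parameter vectors (plus camera parameters) so that the projected shape and texture match the observation; the paper's contribution is that an in-the-wild texture model removes the need to also optimise illumination parameters.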

Series: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN: 1063-6919
ISSN-L: 1063-6919
ISBN: 978-1-5386-0458-8
ISBN Print: 978-1-5386-0457-1
Pages: 5464 - 5473
DOI: 10.1109/CVPR.2017.580
OADOI: https://oadoi.org/10.1109/CVPR.2017.580
Host publication: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21-26 July 2017, Honolulu, Hawaii
Conference: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: J. Booth and G. Trigeorgis were supported by EPSRC DTA awards at Imperial College London. E. Antonakos and S. Ploumpis were partially funded by the European Community Horizon 2020 [H2020/2014-2020] under grant agreement no. 688520 (TeSLA). S. Zafeiriou was partially funded by EPSRC Project EP/N007743/1 (FACER2VM).
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.