N. Xue, J. Deng, S. Cheng, Y. Panagakis and S. Zafeiriou, "Side Information for Face Completion: A Robust PCA Approach," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 10, pp. 2349-2364, 1 Oct. 2019, doi: 10.1109/TPAMI.2019.2902556
Side information for face completion: a robust PCA approach
Authors: Xue, Niannan1; Deng, Jiankang1,2; Cheng, Shiyang1;
1Department of Computing, Imperial College London, Kensington, London SW7 2AZ, United Kingdom
2Facesoft, London W12 0BZ, United Kingdom
3Center for Machine Vision and Signal Analysis, University of Oulu, Oulu 90014, Finland
4Facesoft, London W12 0BZ, United Kingdom
Online Access: PDF Full Text (PDF, 6 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020051229393
Institute of Electrical and Electronics Engineers
Publish Date: 2020-05-12
Robust principal component analysis (RPCA) is a powerful method for learning low-rank feature representations of various visual data. However, for certain types and significant amounts of error corruption, it fails to yield satisfactory results; this drawback can be alleviated by exploiting domain-dependent prior knowledge or information. In this paper, we propose two models for RPCA that take such side information into account, even in the presence of missing values. We apply this framework to the task of UV completion, which is widely used in pose-invariant face recognition. Moreover, we construct a generative adversarial network (GAN) to extract side information as well as subspaces. These subspaces not only assist in the recovery but also speed up the process in the case of large-scale data. We quantitatively and qualitatively evaluate the proposed approaches on both synthetic data and eight real-world datasets to verify their effectiveness.
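The RPCA the abstract builds on decomposes an observed matrix M into a low-rank part L plus a sparse error part S by minimizing the nuclear norm of L plus a weighted l1 norm of S. A minimal NumPy sketch of that baseline decomposition via the standard inexact augmented Lagrangian iteration is shown below; this is an illustrative sketch, not the paper's side-information models, and the function names, step-size heuristic, and toy data are assumptions for demonstration only.

```python
import numpy as np

def shrink(X, tau):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, max_iter=500, tol=1e-7):
    """Split M into low-rank L and sparse S (principal component pursuit)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard PCP sparsity weight
    mu = m * n / (4.0 * np.abs(M).sum())     # common penalty heuristic (assumed)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                        # primal residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# toy demo: rank-2 matrix corrupted by ~10% sparse gross errors
rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.1
S0[mask] = rng.uniform(-5.0, 5.0, mask.sum())
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # relative recovery error
```

The paper's contribution is to augment exactly this kind of objective with side-information terms (and to handle missing entries), so the vanilla iteration above is only the starting point it improves upon.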
IEEE Transactions on Pattern Analysis and Machine Intelligence
Pages: 2349-2364
Type of Publication: A1 Journal article – refereed
Field of Science:
113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
This work was partially funded by the EPSRC project EP/N007743/1 (FACER2VM: Face Matching for Automatic Identity Retrieval, Recognition, Verification and Management), the EPSRC project EP/S010203/1 (DEFORM: Large Scale Shape Analysis of Deformable Models of Humans), the European Community Horizon 2020 [H2020/2014-2020] under grant agreement no. 688520 (TeSLA), and a Google Faculty Fellowship to Dr. Zafeiriou. We thank the NVIDIA Corporation for donating several GPUs used in this work.
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.