A comprehensive performance evaluation of deformable face tracking “In-the-wild”
Authors: Chrysos, Grigorios G.1; Antonakos, Epameinondas1; Snape, Patrick1;
1 Department of Computing, Imperial College London, 180 Queen's Gate, London SW7 2AZ, UK
2 Seeing Machines Ltd., Level 1, 11 Lonsdale St, Braddon, ACT 2612, Australia
3 Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
Online Access: PDF Full Text (PDF, 7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe201902276417
Publish Date: 2019-02-27
Abstract: Recently, technologies such as face detection, facial landmark localisation, and face recognition/verification have matured enough to provide effective and efficient solutions for imagery captured under arbitrary conditions (referred to as "in-the-wild"). This is partially attributed to the fact that comprehensive "in-the-wild" benchmarks have been developed for face detection, landmark localisation and recognition/verification. A very important technology that has not yet been thoroughly evaluated is deformable face tracking "in-the-wild". Until now, performance has mainly been assessed qualitatively, by visually inspecting the result of a deformable face tracking technology on short videos. In this paper, we perform the first, to the best of our knowledge, thorough evaluation of state-of-the-art deformable face tracking pipelines using the recently introduced 300VW benchmark. We evaluate many different architectures, focusing mainly on the task of online deformable face tracking. In particular, we compare the following general strategies: (a) generic face detection plus generic facial landmark localisation; (b) generic model-free tracking plus generic facial landmark localisation; and (c) hybrid approaches combining state-of-the-art face detection, model-free tracking and facial landmark localisation technologies. Our evaluation reveals future avenues for further research on the topic.
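The hybrid strategy (c) described in the abstract can be sketched as a per-frame control loop: the model-free tracker propagates the previous bounding box, and the face detector is invoked only on the first frame or when tracking fails; landmarks are then fitted inside whichever box survives. The sketch below is a minimal illustration of that control flow, not the authors' implementation; the callables `detect_face`, `track_face` and `localise_landmarks` are hypothetical placeholders for whatever detector, tracker and landmark localiser a pipeline plugs in.

```python
from typing import Callable, List, Optional, Tuple

BBox = Tuple[int, int, int, int]          # (x, y, width, height)
Landmarks = List[Tuple[float, float]]     # fitted facial landmark points

def track_sequence(
    frames: List[object],
    detect_face: Callable[[object], Optional[BBox]],
    track_face: Callable[[object, BBox], Optional[BBox]],
    localise_landmarks: Callable[[object, BBox], Landmarks],
) -> List[Optional[Landmarks]]:
    """Hybrid detect/track pipeline sketch: track from the previous box
    when possible, fall back to detection when the tracker fails, then
    fit landmarks inside the resulting box."""
    results: List[Optional[Landmarks]] = []
    prev_box: Optional[BBox] = None
    for frame in frames:
        # Propagate the previous box with the model-free tracker, if any.
        box = track_face(frame, prev_box) if prev_box is not None else None
        if box is None:
            # First frame, or tracking failure: re-initialise by detection.
            box = detect_face(frame)
        if box is None:
            # No face found in this frame; reset and move on.
            results.append(None)
            prev_box = None
            continue
        results.append(localise_landmarks(frame, box))
        prev_box = box
    return results

# Toy usage with stub components (frames are just integers here):
if __name__ == "__main__":
    detect = lambda f: (0, 0, 10, 10)
    track = lambda f, b: b if f < 2 else None   # simulated failure at frame 2
    fit = lambda f, b: [(float(b[0]), float(b[1]))]
    print(track_sequence([0, 1, 2], detect, track, fit))
```

Strategies (a) and (b) fall out of the same loop as degenerate cases: (a) ignores the tracker and re-detects every frame, while (b) never re-detects after initialisation.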
Journal: International Journal of Computer Vision
Pages: 198 - 232
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
GC was supported by an EPSRC DTA award at Imperial College London, as well as by the EPSRC project ADAMANT (EP/L026813/1). The work of PS and EA was funded by the European Community Horizon 2020 Programme [H2020/2014-2020] under Grant Agreement No. 688520 (TeSLA). The work of S. Zafeiriou was funded by the FiDiPro programme of Tekes (Project No. 1849/31/2015), as well as by the EPSRC Programme Grant FACER2VM (EP/N007743/1).
© The Author(s) 2017. This article is published with open access at Springerlink.com.