University of Oulu

G. G. Chrysos and S. Zafeiriou, "PD2T: Person-Specific Detection, Deformable Tracking," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2555-2568, 1 Nov. 2018. doi: 10.1109/TPAMI.2017.2769654

PD2T: person-specific detection, deformable tracking

Author: Chrysos, Grigorios G.1; Zafeiriou, Stefanos1
Organizations: 1Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 8.2 MB)
Language: English
Published: Institute of Electrical and Electronics Engineers, 2018
Publish Date: 2019-02-27


Face detection/alignment methods have reached a satisfactory state for static images captured under arbitrary conditions. Such methods typically perform (joint) fitting on each frame independently and are used in commercial applications; however, in the majority of real-world scenarios it is dynamic scenes that are of interest. We argue that generic per-frame fitting is suboptimal, as it discards the informative correlation between sequential frames, and propose to learn person-specific statistics from the video to improve the generic results. To that end, we introduce a meticulously studied pipeline, which we name PD2T, that performs person-specific detection and landmark localisation. We carry out extensive experimentation with a diverse set of i) generic fitting results and ii) different objects (human faces, animal faces), which illustrates the powerful properties of our proposed pipeline and experimentally verifies that PD2T outperforms all the compared methods.
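The core idea of the abstract, learning person-specific statistics from confident generic fits and using them to improve weaker per-frame results, can be sketched on toy data. This is a minimal illustrative sketch only, not the authors' PD2T implementation: the noise levels, the choice of a mean shape as the "person-specific statistic", and the shrinkage-style refinement are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def generic_fit(true_shape, noise_scale):
    """Stand-in for a generic per-frame landmark localiser (hypothetical)."""
    return true_shape + rng.normal(0.0, noise_scale, true_shape.shape)

# A fixed "identity" shape (5 landmarks, x/y) shared across all frames of a video.
identity_shape = rng.uniform(0, 100, (5, 2))

# Generic fits: a few confident (low-noise) frames and many noisy ones.
confident = [generic_fit(identity_shape, 0.5) for _ in range(10)]
noisy = [generic_fit(identity_shape, 5.0) for _ in range(10)]

# Person-specific statistic learned from the confident generic results
# (here simply the mean shape over the confident frames).
person_specific_mean = np.mean(confident, axis=0)

def refine(fit, prior, alpha=0.8):
    """Shrink a noisy per-frame fit towards the person-specific prior."""
    return alpha * prior + (1 - alpha) * fit

err_generic = np.mean([np.linalg.norm(f - identity_shape) for f in noisy])
err_refined = np.mean([np.linalg.norm(refine(f, person_specific_mean) - identity_shape)
                       for f in noisy])
print(err_generic, err_refined)
```

On this toy data the refined error is far below the generic per-frame error, mirroring the abstract's claim that person-specific statistics learned from the video improve generic results.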


Series: IEEE transactions on pattern analysis and machine intelligence
ISSN: 0162-8828
ISSN-E: 2160-9292
ISSN-L: 0162-8828
Volume: 40
Issue: 11
Pages: 2555 - 2568
DOI: 10.1109/TPAMI.2017.2769654
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: We would like to thank Epameinondas Antonakos and Patrick Snape for our fruitful conversations and their contributions in the preliminary version of this work. The work of Stefanos Zafeiriou has been partially funded by the FiDiPro program of Tekes (project number: 1849/31/2015), as well as the EPSRC project EP/N007743/1 (FACER2VM). The work of Grigorios Chrysos has been funded by a) the EPSRC project EP/L026813/1 Adaptive Facial Deformable Models for Tracking (ADAManT), as well as b) an Imperial College DTA.
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.