
K. Songsri-in, G. Trigeorgis and S. Zafeiriou, "Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models," 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, 2018, pp. 218-225. doi: 10.1109/FG.2018.00040

Deep & Deformable: Convolutional Mixtures of Deformable Part-Based Models

Author: Songsri-in, Kritaphat1; Trigeorgis, George1; Zafeiriou, Stefanos1,2
Organizations: 1Department of Computing, Imperial College London, UK
2Center for Machine Vision and Signal Analysis, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 6.1 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe201902276495
Language: English
Published: Institute of Electrical and Electronics Engineers, 2018
Publish Date: 2019-02-27
Description:

Abstract

Deep Convolutional Neural Networks (DCNNs) are currently the method of choice for tasks such as object and part detection. Before the advent of DCNNs, the method of choice for part detection in a supervised setting (i.e., when part annotations are available) was strongly supervised Deformable Part-based Models (DPMs) on Histogram of Oriented Gradients (HOG) features. Recently, efforts were made to combine the powerful DCNN features with DPMs, which provide an explicit way to model relations between parts. Nevertheless, none of the proposed methodologies provides a unification of DCNNs with strongly supervised DPMs. In this paper, we propose, to the best of our knowledge, the first methodology that jointly trains a strongly supervised DPM and at the same time learns the optimal DCNN features. The proposed methodology not only exploits the relationship between parts but also contains an inherent mechanism for mining hard negatives. We demonstrate the power of the proposed approach on facial landmark detection "in-the-wild", where we provide state-of-the-art results for the problem of facial landmark localisation on standard benchmarks such as 300W and 300VW.
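
The record itself contains no code, but the core idea the abstract describes (a DPM-style scoring layer trained jointly with DCNN features, where part placement is a max over appearance score minus a deformation penalty) can be illustrated with a minimal PyTorch-style sketch. All names, shapes, and parameters below (ConvDPMHead, anchors, the quadratic penalty) are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDPMHead(nn.Module):
    """Sketch of a DPM-style part-scoring head on DCNN features.

    For each part p, an appearance score map comes from a 1x1 convolution
    over the backbone features. A learned quadratic deformation penalty
    is subtracted based on the displacement from the part's anchor, and
    the part response is the max over all locations, a brute-force stand-in
    for the DPM distance transform.
    """
    def __init__(self, in_channels, num_parts):
        super().__init__()
        self.appearance = nn.Conv2d(in_channels, num_parts, kernel_size=1)
        # One (dx^2, dy^2) weight pair per part, kept non-negative via softplus.
        self.deform = nn.Parameter(torch.zeros(num_parts, 2))

    def forward(self, feats, anchors):
        # feats: (B, C, H, W) backbone features.
        # anchors: (P, 2) part anchor positions as (x, y) on the feature grid.
        scores = self.appearance(feats)                      # (B, P, H, W)
        B, P, H, W = scores.shape
        ys = torch.arange(H, device=feats.device).view(1, H, 1)
        xs = torch.arange(W, device=feats.device).view(1, 1, W)
        dy2 = (ys - anchors[:, 1].view(P, 1, 1)) ** 2        # (P, H, 1)
        dx2 = (xs - anchors[:, 0].view(P, 1, 1)) ** 2        # (P, 1, W)
        w = F.softplus(self.deform)                          # (P, 2), >= 0
        penalty = w[:, 0].view(P, 1, 1) * dx2 + w[:, 1].view(P, 1, 1) * dy2
        deformed = scores - penalty.unsqueeze(0)             # (B, P, H, W)
        # Max over locations places each part; the highest-scoring wrong
        # locations in deformed are natural hard negatives for training.
        best, _ = deformed.view(B, P, -1).max(dim=-1)
        return deformed, best

A full model would sum the per-part maxima into a mixture score and backpropagate through both the appearance and deformation terms, so the backbone features and the DPM parameters are learned jointly, which is the unification the abstract refers to.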


ISBN: 978-1-5386-2335-0
ISBN Print: 978-1-5386-2336-7
Pages: 218-225
DOI: 10.1109/FG.2018.00040
OADOI: https://oadoi.org/10.1109/FG.2018.00040
Host publication: 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
Conference: IEEE International Conference on Automatic Face and Gesture Recognition
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: K. Songsri-in was supported by a Royal Thai Government Scholarship. G. Trigeorgis was supported by an EPSRC DTA award at Imperial College London and a Google Fellowship. The work of S. Zafeiriou was partially funded by EPSRC Project EP/N007743/1 (FACER2VM).
Copyright information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.