University of Oulu

Y. Zhou, E. Antonakos, J. Alabort-I-Medina, A. Roussos and S. Zafeiriou, "Estimating Correspondences of Deformable Objects “In-the-Wild”," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 5791-5801. doi: 10.1109/CVPR.2016.624

Estimating correspondences of deformable objects “in-the-wild”

Author: Zhou, Yuxiang1; Antonakos, Epameinondas1; Alabort-i-Medina, Joan1; Roussos, Anastasios1; Zafeiriou, Stefanos1,2
Organizations: 1Department of Computing, Imperial College London, U.K.
2Center for Machine Vision and Signal Analysis, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 5.7 MB)
Language: English
Published: Institute of Electrical and Electronics Engineers, 2016
Publish Date: 2019-02-28


During the past few years we have witnessed the development of many methodologies for building and fitting Statistical Deformable Models (SDMs). The construction of accurate SDMs requires careful annotation of images with regard to a consistent set of landmarks. However, manually annotating a large number of images is a tedious, laborious and expensive procedure. Furthermore, for several deformable objects, e.g., the human body, it is difficult to define a consistent set of landmarks, and thus it becomes impossible to train human annotators to label a collection of images accurately. Nevertheless, for the majority of objects, it is possible to extract the shape by object segmentation or even by shape drawing. In this paper, we show for the first time, to the best of our knowledge, that it is possible to construct SDMs by putting object shapes in dense correspondence. Such SDMs can be built with much less effort for a large battery of objects. Additionally, we show that, by sampling the dense model, a part-based SDM can be learned with its parts being in correspondence. We employ our framework to develop SDMs of human arms and legs, which can be used to segment the outline of the human body, as well as to provide better and more consistent annotations for body joints.
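To make the idea of a statistical deformable model concrete, the following is a minimal sketch of a point distribution model: given a set of shapes already placed in point-to-point correspondence (the hard part that the paper automates via dense correspondence), the model is simply the mean shape plus principal modes of deformation obtained by PCA. The function names (`build_shape_model`, `instance`) and the toy square data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_shape_model(shapes, n_components=2):
    """Build a simple statistical shape model (point distribution model).
    shapes: (n_samples, n_points, 2) array of 2-D landmarks assumed to be
    in point-to-point correspondence across samples.
    Returns the mean shape vector and the principal deformation modes."""
    X = shapes.reshape(len(shapes), -1)       # flatten to (n_samples, 2*n_points)
    mean = X.mean(axis=0)
    # PCA via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]            # principal modes of deformation
    return mean, components

def instance(mean, components, weights):
    """Generate a new shape by perturbing the mean along the learned modes."""
    return (mean + weights @ components).reshape(-1, 2)

# Toy example: noisy unit squares stand in for shapes in correspondence
rng = np.random.default_rng(0)
base = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shapes = base + 0.05 * rng.standard_normal((50, 4, 2))
mean, comps = build_shape_model(shapes, n_components=2)
new_shape = instance(mean, comps, np.array([0.1, -0.05]))
```

In this sketch the correspondence is given by construction; the paper's contribution is obtaining such dense correspondences automatically from segmented or drawn shapes, so that models like the one above can be built without manual landmark annotation.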


ISBN Print: 978-1-4673-8851-1
Pages: 5791–5801
DOI: 10.1109/CVPR.2016.624
Host publication: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference: IEEE Conference on Computer Vision and Pattern Recognition
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: The work of E. Antonakos was partially funded by the EPSRC project EP/J017787/1 (4DFAB). The work of J. Alabort-i-Medina was partially funded by an EPSRC DTA. The work of A. Roussos was partially funded by the EPSRC project EP/N007743/1 (FACER2VM). The work of S. Zafeiriou was partially funded by the FiDiPro program of Tekes (project number: 1849/31/2015), as well as by the European Community Horizon 2020 [H2020/2014-2020] under grant agreement no. 688520 (TeSLA).
Copyright information: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.