Incorporating high-level and low-level cues for pain intensity estimation |
|
Author: | Yang, Ruijing1,2; Hong, Xiaopeng2; Peng, Jinye1; |
Organizations: |
1 School of Information Science and Technology, Northwest University, Xi’an, P. R. China |
2 Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland |
3 School of Electronics and Information, Northwestern Polytechnical University, Xi’an, P. R. China |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 0.3 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe201902266283 |
Language: | English |
Published: | IEEE Computer Society, 2018 |
Publish Date: | 2019-02-26 |
Description: |
Abstract: Pain is a transient physical reaction that manifests on the human face. Automatic pain intensity estimation is of great importance in clinical and health-care applications. Pain expression is identified by a set of deformations of facial features; hence, features are essential for pain estimation. In this paper, we propose a novel method that encodes low-level descriptors and powerful high-level deep features through a weighting process to form an efficient representation of facial images. To obtain a powerful yet compact low-level representation, we explore second-order pooling over the local descriptors. Instead of direct concatenation, we develop an efficient fusion approach that unites the low-level local descriptors and the high-level deep features. To the best of our knowledge, this is the first approach to incorporate low-level local statistics together with high-level deep features in pain intensity estimation. Experiments are conducted on benchmark pain databases. The results demonstrate that the proposed low-to-high-level representation outperforms other methods and achieves promising results.
|
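The abstract's pipeline (second-order pooling over local descriptors, followed by a weighted fusion with deep features) can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's exact method: the power/L2 normalization and the single fusion weight `w` are assumptions, and the paper's actual descriptors, network, and weighting scheme may differ.

```python
import numpy as np

def second_order_pooling(descriptors):
    """Second-order pooling: summarize (n, d) local descriptors
    from one image by their d x d second-order statistic."""
    n, d = descriptors.shape
    G = descriptors.T @ descriptors / n          # d x d second-order matrix
    v = G[np.triu_indices(d)]                    # vectorize the upper triangle
    # Power + L2 normalization, a common choice for second-order features
    v = np.sign(v) * np.sqrt(np.abs(v))
    return v / (np.linalg.norm(v) + 1e-12)

def fuse(low_level, deep, w=0.5):
    """Illustrative weighted fusion (not direct concatenation alone):
    L2-normalize each representation, then concatenate with weights."""
    low_level = low_level / (np.linalg.norm(low_level) + 1e-12)
    deep = deep / (np.linalg.norm(deep) + 1e-12)
    return np.concatenate([w * low_level, (1.0 - w) * deep])
```

For d-dimensional local descriptors, the pooled vector has length d(d+1)/2; e.g., 8-dimensional descriptors yield a 36-dimensional second-order representation before fusion with the deep feature.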
Series: | International Conference on Pattern Recognition |
ISSN: | 1051-4651 |
ISSN-L: | 1051-4651 |
ISBN: | 978-1-5386-3788-3 |
ISBN Print: | 978-1-5386-3789-0 |
Pages: | 3495 - 3500 |
DOI: | 10.1109/ICPR.2018.8545244 |
OADOI: | https://oadoi.org/10.1109/ICPR.2018.8545244 |
Host publication: | 2018 24th International Conference on Pattern Recognition (ICPR) |
Conference: | International Conference on Pattern Recognition |
Type of Publication: | A4 Article in conference proceedings |
Field of Science: | 213 Electronic, automation and communications engineering, electronics |
Funding: |
This work was supported, in part, by the National Natural Science Foundation of China (No. 61772419 & 61572205), the Program for Changjiang Scholars and Innovative Research Team in University of the Ministry of Education of China (No. IRT 17R87), the Special Research Project of the Shaanxi Education Department (No. 16JK1774), the National Key Research and Development Program of China (No. 2017YFB0203104), and the Fund for Integration of Cloud Computing and Big Data, Innovation of Science and Education (No. 2017A1950). This work was also partly supported by the Academy of Finland, Infotech Oulu, and the Tekes Fidipro Program. Furthermore, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
Copyright information: |
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |