University of Oulu

Yang, R., Guan, Z., Yu, Z., Feng, X., Peng, J., & Zhao, G. (2021). Non-contact pain recognition from video sequences with remote physiological measurements prediction. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21) (pp. 1231-1237). International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2021/170

Non-contact pain recognition from video sequences with remote physiological measurements prediction

Saved in:
Author: Yang, Ruijing1; Guan, Ziyu1; Yu, Zitong2;
Organizations: 1Northwest University
2University of Oulu
3Northwestern Polytechnical University
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 0.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202201189161
Language: English
Published: International Joint Conferences on Artificial Intelligence, 2021
Publish Date: 2022-01-18
Description:

Abstract

Automatic pain recognition is paramount for medical diagnosis and treatment. Existing works fall into three categories: assessing facial appearance changes, exploiting physiological cues, or fusing the two in a multi-modal manner. However, (1) appearance changes are easily affected by subjective factors, which impedes objective pain recognition; moreover, appearance-based approaches ignore the long-range spatial-temporal dependencies that are important for modeling expressions over time; and (2) physiological cues are obtained by attaching sensors to the human body, which is inconvenient and uncomfortable. In this paper, we present a novel multi-task learning framework that encodes both appearance changes and physiological cues in a non-contact manner for pain recognition. The framework captures both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended physiological cues (remote photoplethysmography, rPPG) recovered from videos in the auxiliary task. This framework, dubbed the rPPG-enriched Spatio-Temporal Attention Network (rSTAN), achieves state-of-the-art performance for non-contact pain recognition on publicly available pain databases, demonstrating that rPPG prediction can serve as an auxiliary task to facilitate non-contact automatic pain recognition.
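The multi-task setup described in the abstract pairs a main pain-recognition objective with an auxiliary rPPG-prediction objective. A minimal sketch of such a joint loss is shown below; the function names and the weighting parameter `lambda_rppg` are illustrative assumptions, not taken from the paper itself.

```python
import math

def cross_entropy(pain_probs, pain_label):
    """Negative log-likelihood of the true pain class (main task)."""
    return -math.log(pain_probs[pain_label])

def mse(pred_signal, ref_signal):
    """Mean squared error between predicted and reference rPPG traces (auxiliary task)."""
    n = len(pred_signal)
    return sum((p - r) ** 2 for p, r in zip(pred_signal, ref_signal)) / n

def multitask_loss(pain_probs, pain_label, rppg_pred, rppg_ref, lambda_rppg=0.5):
    """Joint objective: pain classification loss plus a weighted rPPG regression loss.

    lambda_rppg is a hypothetical trade-off weight balancing the auxiliary task.
    """
    return cross_entropy(pain_probs, pain_label) + lambda_rppg * mse(rppg_pred, rppg_ref)
```

For example, with a perfect rPPG prediction the auxiliary term vanishes and only the classification loss remains, so the auxiliary task only influences training when the recovered physiological signal deviates from the reference.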


ISBN: 978-0-9992411-9-6
Pages: 1231 - 1237
DOI: 10.24963/ijcai.2021/170
OADOI: https://oadoi.org/10.24963/ijcai.2021/170
Host publication: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), 19-27 August, Montreal, Canada
Host publication editor: Zhou, Zhi-Hua
Conference: International joint conference on artificial intelligence
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Subjects:
Funding: This work was partly supported by the Xi’an Key Laboratory of Intelligent Perception and Cultural Inheritance (No. 2019219614SYS011CG033), the National Natural Science Foundation of China (No. 61936006, 61772419 & 6177051263), the Program for Changjiang Scholars and Innovative Research Team in University (No. IRT 17R87), and the Natural Science Basic Research Program of Shaanxi (No. 2020JQ-850 & 2019JM-103). This work was also supported by the Academy of Finland for the ICT 2023 project (grant 328115), Infotech Oulu, and project MiGA (grant 316765).
Academy of Finland Grant Numbers: 328115; 316765
Detailed Information: 328115 (Academy of Finland Funding decision)
316765 (Academy of Finland Funding decision)
Copyright information: © 2021 IJCAI.org.