Z. Yu, Y. Shen, J. Shi, H. Zhao, P. Torr and G. Zhao, "PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer," 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 4176-4186, doi: 10.1109/CVPR52688.2022.00415

PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer

Author: Yu, Zitong1; Shen, Yuming2; Shi, Jingang3; Zhao, Hengshuang4; Torr, Philip2; Zhao, Guoying1
Organizations: 1CMVS, University of Oulu, Finland
2TVG, University of Oxford
3Xi'an Jiaotong University
4The University of Hong Kong
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 1.5 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2023032332960
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2023-03-23
Description:

Abstract

Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications. Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose PhysFormer, an end-to-end video-transformer-based architecture that adaptively aggregates both local and global spatio-temporal features to enhance the rPPG representation. As the key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal-difference-guided global attention, and then refine the local spatio-temporal representation against interference. Furthermore, we propose label distribution learning and a curriculum-learning-inspired dynamic constraint in the frequency domain, which provide elaborate supervision for PhysFormer and alleviate overfitting. Comprehensive experiments on four benchmark datasets show our superior performance in both intra- and cross-dataset testing. One highlight is that, unlike most transformer networks, which need pretraining on large-scale datasets, the proposed PhysFormer can easily be trained from scratch on rPPG datasets, which makes it promising as a novel transformer baseline for the rPPG community. The code is available at https://github.com/ZitongYu/PhysFormer.
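To make the temporal-difference guidance concrete, below is a minimal PyTorch sketch (not the authors' released code; see the GitHub repository above for that) of a temporal center-difference 3D convolution, the kind of operator PhysFormer uses to inject frame-to-frame difference cues into the attention queries and keys. The class name, the 3x3x3 kernel, and the default theta=0.7 are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalDifferenceConv3d(nn.Module):
    """Sketch of a temporal center-difference 3D convolution.

    theta=0 recovers a vanilla 3D convolution; larger theta puts more
    weight on frame-to-frame (temporal difference) variation, which is
    where the subtle quasi-periodic rPPG signal lives.
    """

    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        # Vanilla 3D convolution response.
        out = self.conv(x)
        if self.theta == 0:
            return out
        # Collapse the non-center temporal slices of the kernel into a
        # 1x1x1 kernel; its response approximates how the kernel reacts
        # to a temporally constant input, so subtracting it emphasizes
        # temporal change rather than static appearance.
        w = self.conv.weight  # (out_ch, in_ch, T, H, W)
        kernel_diff = (w[:, :, 0] + w[:, :, 2]).sum(dim=(2, 3))[:, :, None, None, None]
        out_diff = F.conv3d(x, kernel_diff, bias=None,
                            stride=self.conv.stride, padding=0)
        return out - self.theta * out_diff

As a usage example, an input of shape (batch, channels, frames, height, width) such as torch.randn(2, 3, 16, 32, 32) passed through TemporalDifferenceConv3d(3, 8) yields an output of shape (2, 8, 16, 32, 32).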

Series: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN: 1063-6919
ISSN-E: 2575-7075
ISSN-L: 1063-6919
ISBN: 978-1-6654-6946-3
ISBN Print: 978-1-6654-6947-0
Pages: 4176 - 4186
DOI: 10.1109/CVPR52688.2022.00415
OADOI: https://oadoi.org/10.1109/cvpr52688.2022.00415
Host publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: This work was supported by the Academy of Finland (Academy Professor project EmotionAI, grants 336116 and 345122; ICT 2023 project, grant 328115), by the Ministry of Education and Culture of Finland (AI Forum project), by the National Natural Science Foundation of China (grant 62002283), and by the EPSRC (Turing AI Fellowship EP/W002981/1 and EPSRC/MURI grant EP/N019474/1).
Academy of Finland Grant Numbers: 336116, 345122, 328115
Detailed Information: 336116 (Academy of Finland Funding decision)
345122 (Academy of Finland Funding decision)
328115 (Academy of Finland Funding decision)
Copyright information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.