Yante Li, Xiaohua Huang, Guoying Zhao, Micro-expression action unit detection with spatial and channel attention, Neurocomputing, Volume 436, 2021, Pages 221-231, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2021.01.032
Micro-expression action unit detection with spatial and channel attention
|Author:||Li, Yante1; Huang, Xiaohua2; Zhao, Guoying1,3|
1Center for Machine Vision and Signal Analysis, University of Oulu, Finland
2School of Computer Engineering, Nanjing Institute of Technology, China
3School of Information and Technology, Northwest University, Xi’an, China
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2021051029330|
|Publish Date:||2021-05-10|
Action Unit (AU) detection plays an important role in facial behaviour analysis. In the literature, AU detection has been studied extensively for macro-expressions; however, to the best of our knowledge, research on AU analysis for micro-expressions remains limited. In this paper, we focus on AU detection in micro-expressions. Because micro-expression databases are small and the expressions themselves are of low intensity, micro-expression AU detection is challenging. To alleviate these problems, we propose a novel micro-expression AU detection method that exploits self high-order statistics of spatial-wise and channel-wise features, which can be considered spatial and channel attention, respectively. Through the spatial attention module, we exploit the rich relational information among facial regions to increase the robustness of AU detection on limited micro-expression samples. In addition, considering the low intensity of micro-expression AUs, we further explore high-order statistics to better capture subtle regional changes on the face and obtain more discriminative AU features. Extensive experiments show that our proposed approach outperforms the basic framework by 0.0859 on CASME II, 0.0485 on CASME, and 0.0644 on SAMM in terms of the average F1-score.
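To illustrate the idea of attention derived from second-order feature statistics, the following is a minimal NumPy sketch, not the authors' implementation: it treats the mean of each unit's covariance with all other units (channels, or spatial positions) as its attention score and gates the feature map with a sigmoid. The function names and the sigmoid gating are illustrative assumptions.

```python
import numpy as np

def second_order_attention(feat, axis="channel"):
    """Toy covariance-based (second-order) attention scores.

    feat: array of shape (C, H, W).
    axis='channel' returns C weights; axis='spatial' returns H*W weights.
    Hypothetical simplification of attention from high-order statistics.
    """
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)            # channels x positions
    if axis == "spatial":
        X = X.T                           # positions x channels
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)   # pairwise second-order statistics
    score = cov.mean(axis=1)              # aggregate each unit's relations
    return 1.0 / (1.0 + np.exp(-score))   # sigmoid gate in (0, 1)

def apply_attention(feat):
    """Reweight a (C, H, W) feature map by channel and spatial attention."""
    C, H, W = feat.shape
    ch = second_order_attention(feat, "channel")   # shape (C,)
    sp = second_order_attention(feat, "spatial")   # shape (H*W,)
    out = feat * ch[:, None, None]                 # broadcast over H, W
    return out * sp.reshape(1, H, W)               # broadcast over C

feat = np.random.default_rng(0).normal(size=(4, 6, 6))
out = apply_attention(feat)
assert out.shape == feat.shape
```

The intuition matches the abstract: units whose activations co-vary strongly with the rest of the map (e.g. correlated facial regions) receive larger gates, which is meant to help with the subtle, low-intensity changes of micro-expression AUs.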
|Pages:||221 - 231|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||213 Electronic, automation and communications engineering, electronics|
This work was supported by Infotech Oulu, the National Natural Science Foundation of China (Grant Nos. 61772419, 62076122), the Academy of Finland project MiGA (grant 316765), the ICT 2023 project (grant 328115), the Jiangsu Specially-Appointed Professor Program, the Talent Startup project of NJIT (No. YKJ201982), the Jiangsu joint research project of the Sino-foreign cooperative education platform, and the Technology Innovation Project of Nanjing for Oversea Scientist. The authors also wish to acknowledge CSC – IT Center for Science, Finland, for computational resources.
|Academy of Finland Grant Number:||316765; 328115|
© 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).