Y. Li, W. Peng and G. Zhao, "Micro-expression Action Unit Detection with Dual-view Attentive Similarity-Preserving Knowledge Distillation," 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 2021, pp. 1-8, doi: 10.1109/FG52635.2021.9666975.
Micro-expression action unit detection with dual-view attentive similarity-preserving knowledge distillation
|Author:||Li, Yante¹; Peng, Wei¹; Zhao, Guoying¹|
¹ Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
|Online Access:||PDF Full Text (PDF, 1.8 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022030321799|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2022-03-03|
Encoding facial expressions via action units (AUs) has been found effective in resolving ambiguity among different expressions, so AU detection plays an important role in emotion analysis. While many AU detection methods have been proposed for common facial expressions, micro-expression AU detection has received very limited study. It is challenging because micro-expression appearance is subtle, and the spontaneous nature of micro-expressions makes data collection difficult, leaving only small-scale datasets. In this paper, we focus on micro-expression AU detection and aim to contribute to the community. To address the above issues, a novel dual-view attentive similarity-preserving distillation method is proposed for robust micro-expression AU detection by leveraging massive facial expressions in the wild. Through this attentive similarity-preserving distillation, we overcome the domain-shift problem and efficiently distill essential AU knowledge from common facial AUs. Furthermore, since the generalization ability of the teacher network is important for knowledge distillation, a semi-supervised co-training approach is developed to construct a generalized teacher network that learns discriminative AU representations. Extensive experiments demonstrate that the proposed knowledge distillation method can effectively distill and transfer cross-domain knowledge for robust micro-expression AU detection.
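The "similarity-preserving distillation" named in the abstract builds on the general idea of similarity-preserving knowledge distillation: the student is trained to reproduce the teacher's pairwise similarity structure over a batch rather than its raw activations. A minimal NumPy sketch of that generic loss follows; it is an illustration of the underlying technique only, not the paper's dual-view attentive variant, and the function name and signature are assumptions.

```python
import numpy as np

def sp_distill_loss(teacher_feats, student_feats):
    """Generic similarity-preserving distillation loss (illustrative sketch).

    teacher_feats, student_feats: (batch, dim) activation matrices.
    The loss penalizes differences between the row-normalized
    batch-similarity matrices of teacher and student.
    """
    b = teacher_feats.shape[0]

    def norm_sim(a):
        g = a @ a.T  # (b, b) pairwise similarities within the batch
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        return g / np.maximum(norms, 1e-12)  # row-wise L2 normalization

    diff = norm_sim(teacher_feats) - norm_sim(student_feats)
    return np.sum(diff ** 2) / (b * b)  # squared Frobenius norm / b^2
```

Because the loss compares similarity structure, a student whose features match the teacher's batch geometry incurs zero loss even if the two feature spaces have different dimensionality.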
|Pages:||1 - 8|
2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021)
IEEE International Conference on Automatic Face and Gesture Recognition
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences|
This work was supported by Infotech Oulu, the National Natural Science Foundation of China (Grant No. 61772419), the Ministry of Education and Culture of Finland (AI Forum project), and the Academy of Finland (ICT 2023 project, grant 328115).
|Academy of Finland Grant Number:||328115 (Academy of Finland funding decision)|
© 2021 European Union/IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.