B. Song, Y. Zong, K. Li, J. Zhu, J. Shi and L. Zhao, "Cross-Database Micro-Expression Recognition Based on a Dual-Stream Convolutional Neural Network," in IEEE Access, vol. 10, pp. 66227-66237, 2022, doi: 10.1109/ACCESS.2022.3185132.
Cross-database micro-expression recognition based on a dual-stream convolutional neural network
|Author:||Song, Baolin1; Zong, Yuan2; Li, Ke1;|
1Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, China
2Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
3Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
|Online Access:||PDF Full Text (PDF, 2.3 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2023060252038|
|Publisher:||Institute of Electrical and Electronics Engineers (IEEE)|
|Publish Date:||2023-06-02|
Cross-database micro-expression recognition (CDMER) is a challenging task in which the target (testing) and source (training) samples come from different micro-expression (ME) databases. The resulting mismatch between their feature distributions degrades the performance of many existing MER methods. To address this problem, we propose a dual-stream convolutional neural network (DSCNN) for CDMER tasks. The two stream branches of the DSCNN are designed to learn temporal and facial-region cues, respectively, from ME samples with the goal of recognizing MEs. In addition, during training a domain discrepancy loss enforces similar feature distributions for the target and source samples in selected layers of the DSCNN. Extensive CDMER experiments are conducted to evaluate the DSCNN. The results show that the proposed DSCNN model achieves higher recognition accuracy than several representative CDMER methods.
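The abstract does not specify which domain discrepancy measure the DSCNN minimizes; a common and minimal choice for aligning source and target feature distributions is a linear maximum mean discrepancy (MMD), i.e., the squared distance between the mean feature vectors of the two domains. The sketch below is illustrative only (the function name `linear_mmd` and the linear-MMD form are assumptions, not the paper's stated loss):

```python
import numpy as np

def linear_mmd(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared Euclidean distance between the mean feature vectors of two
    domains -- a simple (linear-kernel) form of domain-discrepancy loss."""
    mu_s = source_feats.mean(axis=0)
    mu_t = target_feats.mean(axis=0)
    diff = mu_s - mu_t
    return float(diff @ diff)

# Toy example: two feature batches whose means differ by a domain shift.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))  # "source database" features
tgt = rng.normal(0.5, 1.0, size=(64, 16))  # "target database" features

print(linear_mmd(src, tgt))  # positive: distributions are misaligned
print(linear_mmd(src, src))  # 0.0: identical distributions
```

During training, a term like this would be added to the classification loss at selected layers, so that minimizing the total loss pushes the source and target feature statistics together.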
|Pages:||66227 - 66237|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences; 213 Electronic, automation and communications engineering, electronics|
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1305200; in part by the National Natural Science Foundation of China under Grant 61921004, Grant 61902064, Grant 61572009, Grant 61906094, Grant 61673108, Grant 61571106, and Grant 61703201; in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2016616; in part by the Natural Science Foundation of Jiangsu Province under Grant BK20170765; and in part by the Fundamental Research Funds for the Central Universities under Grant 2242018K3DN01 and Grant 2242019K40047.
© The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.