Cross-database micro-expression recognition based on a dual-stream convolutional neural network |
|
Author: | Song, Baolin1; Zong, Yuan2; Li, Ke1; |
Organizations: |
1 Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, China
2 Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
3 Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland |
Format: | article |
Version: | published version |
Access: | open |
Online Access: | PDF Full Text (PDF, 2.3 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2023060252038 |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2022 |
Publish Date: | 2023-06-02 |
Description: |
Cross-database micro-expression recognition (CDMER) is a difficult task in which the target (testing) and source (training) samples come from different micro-expression (ME) databases. This leads to inconsistent feature distributions between the two sets and hence degrades the performance of many existing MER methods. To address this problem, we propose a dual-stream convolutional neural network (DSCNN) for dealing with CDMER tasks. In the DSCNN, two stream branches are designed to learn temporal and facial-region cues in ME samples with the goal of recognizing MEs. In addition, during training, a domain discrepancy loss is used to enforce similar feature distributions for the target and source samples in some layers of the DSCNN. Extensive CDMER experiments are conducted to evaluate the DSCNN. The results show that our proposed DSCNN model achieves higher recognition accuracy than some representative CDMER methods.
|
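The abstract describes two CNN branches (one for temporal cues, one for facial-region cues) whose fused features feed a classifier, with a domain discrepancy loss aligning source and target feature distributions during training. Below is a minimal sketch of that idea, assuming PyTorch; the layer sizes, input modalities, and the linear-kernel MMD used as the discrepancy measure (along with the names StreamBranch, DualStreamNet, mmd_linear, and train_step) are illustrative assumptions, not the authors' published architecture or loss.

```python
# Sketch of a dual-stream CNN with a domain discrepancy (MMD) penalty for
# cross-database micro-expression recognition. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mmd_linear(f_src, f_tgt):
    """Linear-kernel maximum mean discrepancy between two feature batches."""
    delta = f_src.mean(dim=0) - f_tgt.mean(dim=0)
    return delta.dot(delta)


class StreamBranch(nn.Module):
    """One convolutional branch (e.g. temporal or facial-region stream)."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return F.relu(self.fc(self.conv(x).flatten(1)))


class DualStreamNet(nn.Module):
    """Two stream branches whose features are fused into a shared classifier."""
    def __init__(self, temporal_channels, region_channels, num_classes=3):
        super().__init__()
        self.temporal = StreamBranch(temporal_channels)
        self.region = StreamBranch(region_channels)
        self.classifier = nn.Linear(2 * 128, num_classes)

    def forward(self, x_temporal, x_region):
        feat = torch.cat([self.temporal(x_temporal), self.region(x_region)], dim=1)
        return self.classifier(feat), feat


def train_step(model, optimizer, src_batch, tgt_batch, lambda_mmd=1.0):
    """One step: classification loss on labelled source samples plus an MMD
    penalty pulling source and target feature distributions together."""
    (src_t, src_r, src_y), (tgt_t, tgt_r) = src_batch, tgt_batch
    logits, f_src = model(src_t, src_r)
    _, f_tgt = model(tgt_t, tgt_r)
    loss = F.cross_entropy(logits, src_y) + lambda_mmd * mmd_linear(f_src, f_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The structure follows the abstract (two branches, a fused classifier, and an alignment penalty between unlabelled target and labelled source features); the specific discrepancy measure and branch inputs in the published DSCNN may differ.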
Series: | IEEE Access |
ISSN: | 2169-3536 |
ISSN-E: | 2169-3536 |
ISSN-L: | 2169-3536 |
Volume: | 10 |
Pages: | 66227 - 66237 |
DOI: | 10.1109/access.2022.3185132 |
OADOI: | https://oadoi.org/10.1109/access.2022.3185132 |
Type of Publication: | A1 Journal article – refereed |
Field of Science: | 113 Computer and information sciences; 213 Electronic, automation and communications engineering, electronics |
Funding: |
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1305200; in part by the National Natural Science Foundation of China under Grant 61921004, Grant 61902064, Grant 61572009, Grant 61906094, Grant 61673108, Grant 61571106, and Grant 61703201; in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2016616; in part by the Natural Science Foundation of Jiangsu Province under Grant BK20170765; and in part by the Fundamental Research Funds for the Central Universities under Grant 2242018K3DN01 and Grant 2242019K40047. |
Copyright information: |
© The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. |