SG-DSN: A Semantic Graph-based Dual-Stream Network for facial expression recognition |
|
Authors: | Liu, Yang (1,2); Zhang, Xingming (1); Zhou, Jinzhao (1) |
Organizations: |
(1) School of Computer Science and Engineering, South China University of Technology, China; (2) Center for Machine Vision and Signal Analysis, University of Oulu, Finland |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 3.4 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2022020918260 |
Language: | English |
Published: | Elsevier, 2021 |
Publish Date: | 2023-07-07 |
Description: |
Abstract: Facial expression recognition (FER) is a crucial task for human emotion analysis and has attracted wide interest in the fields of computer vision and affective computing. General convolution-based FER methods rely on the powerful pattern abstraction of deep models, but they lack the ability to exploit the semantic information behind significant facial areas identified in physiological anatomy and cognitive neurology. In this work, we propose a novel approach for expression feature learning called the Semantic Graph-based Dual-Stream Network (SG-DSN), which designs a graph representation to model key appearance and geometric facial changes as well as their semantic relationships. A dual-stream network (DSN) with stacked graph convolutional attention blocks (GCABs) is introduced to automatically learn discriminative features from the organized graph representation and finally predict expressions. Experiments on three lab-controlled datasets and two in-the-wild datasets demonstrate that the proposed SG-DSN achieves competitive performance compared with several recent methods.
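The record does not specify how the abstract's graph convolutional attention blocks (GCABs) are implemented. As an illustrative sketch only, a generic graph convolution with GAT-style attention over landmark nodes might look like the following; all names, shapes, and the attention scoring scheme are assumptions, not the authors' method:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcab_forward(X, A, W, a):
    """One hypothetical graph-convolutional attention block (a sketch).

    X: (N, F)   node features, e.g. one vector per facial landmark
    A: (N, N)   adjacency matrix encoding semantic links between landmarks
    W: (F, Fo)  learnable projection
    a: (2*Fo,)  learnable attention vector (GAT-style scoring, an assumption)
    """
    H = X @ W                       # project node features to (N, Fo)
    N = H.shape[0]
    # pairwise scores e_ij = LeakyReLU(a^T [h_i || h_j])
    scores = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            z = np.concatenate([H[i], H[j]]) @ a
            scores[i, j] = z if z > 0 else 0.2 * z   # LeakyReLU
    # mask non-edges so attention only flows along the semantic graph
    scores = np.where(A > 0, scores, -1e9)
    alpha = softmax(scores, axis=1)  # normalize over each node's neighbors
    return np.maximum(alpha @ H, 0)  # attention-weighted aggregation + ReLU
```

Stacking several such blocks, one stream over appearance features and one over geometric features, would follow the dual-stream idea described in the abstract, but the real architecture, feature extraction, and fusion are detailed only in the full paper.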
|
Series: |
Neurocomputing |
ISSN: | 0925-2312 |
ISSN-E: | 1872-8286 |
ISSN-L: | 0925-2312 |
Volume: | 462 |
Pages: | 320 - 330 |
DOI: | 10.1016/j.neucom.2021.07.017 |
OADOI: | https://oadoi.org/10.1016/j.neucom.2021.07.017 |
Type of Publication: |
A1 Journal article – refereed |
Field of Science: |
113 Computer and information sciences |
Funding: |
This work was supported by the China Scholarship Council (CSC, No. 202006150091). |
Copyright information: |
© 2021. This manuscript version is made available under the CC BY-NC-ND 4.0 license: https://creativecommons.org/licenses/by-nc-nd/4.0/ |