Automatic 4D facial expression recognition via collaborative cross-domain dynamic image network |
|
Author: | Behzad, Muzammil¹; Vo, Nhat¹; Li, Xiaobai¹ |
Organizations: |
¹Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Oulu, Finland |
Format: | article |
Version: | published version |
Access: | open |
Online Access: | PDF Full Text (PDF, 8.1 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe202002256421 |
Language: | English |
Published: | British Machine Vision Association Press, 2019 |
Publish Date: | 2020-02-25 |
Description: |
Abstract: This paper proposes a novel 4D Facial Expression Recognition (FER) method using a Collaborative Cross-domain Dynamic Image Network (CCDN). Given 4D data of face scans, we first compute its geometrical images, and then combine their correlated information in the proposed cross-domain image representations. The acquired set is then used to generate cross-domain dynamic images (CDI) via rank pooling, which encapsulate facial deformations over time in a single image. For the training phase, these CDIs are fed into an end-to-end deep learning model, and the resulting predictions collaborate over multi-views for performance gain in expression classification. Furthermore, we propose a 4D augmentation scheme that not only expands the training data scale but also introduces significant facial muscle movement patterns to improve FER performance. Results from extensive experiments on the commonly used BU-4DFE dataset under widely adopted settings show that our proposed method outperforms the state-of-the-art 4D FER methods by achieving an accuracy of 96.5%, indicating its effectiveness.
|
Pages: | 1 - 12 |
Host publication: |
The British Machine Vision Conference 2019 (BMVC), 9th-12th September 2019, Cardiff, UK |
Conference: |
British Machine Vision Conference |
Type of Publication: |
D3 Professional conference proceedings |
Field of Science: |
113 Computer and information sciences |
Copyright information: |
© 2019. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. |
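Note: The abstract above describes collapsing a temporal sequence of geometrical face images into a single dynamic image via rank pooling. As a rough, non-authoritative illustration of the general rank-pooling idea (not the authors' CCDN implementation), the sketch below applies the closed-form approximate rank-pooling weights alpha_t = 2t - T - 1 known from Bilen et al.'s dynamic-image work to a toy frame sequence; the function name and the synthetic data are assumptions for illustration only.

import numpy as np

def approximate_rank_pooling(frames):
    """Collapse a temporal image sequence of shape (T, H, W) or (T, H, W, C)
    into a single 'dynamic image'.

    Illustrative sketch: uses the linear approximate rank-pooling weights
    alpha_t = 2t - T - 1, which weight later frames positively and earlier
    frames negatively, so the output encodes the direction of temporal change.
    """
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=np.float64)
    alpha = 2.0 * t - T - 1.0                       # rank-pooling weights
    # Weighted sum over the time axis: contracts (T,) against (T, H, W[, C]).
    dyn = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so the result can be fed to a CNN like any image
    # (a common convenience step, not something the abstract specifies).
    dyn -= dyn.min()
    if dyn.max() > 0:
        dyn *= 255.0 / dyn.max()
    return dyn.astype(np.uint8)

# Toy usage: 8 synthetic 64x64 frames standing in for real geometrical images.
seq = np.random.rand(8, 64, 64)
dynamic_image = approximate_rank_pooling(seq)
print(dynamic_image.shape)  # (64, 64)

In the paper's pipeline, a pooled image of this kind would presumably be computed per view and per geometrical-image domain before classification by the deep model, with the multi-view predictions then collaborating as described in the abstract.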