University of Oulu

X. Li et al., "4DME: A Spontaneous 4D Micro-Expression Dataset With Multimodalities," in IEEE Transactions on Affective Computing, 2022, doi: 10.1109/TAFFC.2022.3182342

4DME: A spontaneous 4D micro-expression dataset with multimodalities

Author: Li, Xiaobai (1); Cheng, Shiyang (2); Li, Yante (1)
Organizations: (1) Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
(2) Intelligent Behaviour Understanding Group (iBUG), Imperial College London, UK
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 5.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe20230913123982
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2023-09-13
Description:

Abstract

Micro-expressions (MEs) are a special form of facial expression which may occur when people try to hide their true feelings for some reason. MEs are important clues for revealing people’s true feelings, but they are difficult or even impossible for ordinary people to perceive with the naked eye, as they are very short and subtle. It is expected that robust computer vision methods can be developed to analyze MEs automatically, which requires a large amount of ME data. The current ME datasets are insufficient, and most contain only a single form of 2D color video. Research on 4D data of ordinary facial expressions has flourished, but so far no 4D data has been available for ME research. In the current study, we introduce the 4DME dataset: a new spontaneous ME dataset which includes 4D data along with three other video modalities. Both micro- and macro-expression clips are labeled in 4DME, and 22 AU labels and five categories of emotion labels are annotated. Experiments are carried out using three 2D-based methods and one 4D-based method to provide baseline results. The results indicate that the 4D data can potentially benefit ME recognition. The 4DME dataset could be used for developing 4D-based approaches or for exploring the fusion of multiple video sources (e.g., texture and depth) for the task of ME analysis in the future. Besides, we also emphasize the importance of forming clear and unified criteria for ME annotation in future ME data collection studies. Several key questions related to ME annotation are listed and discussed in depth, especially the relationship between AUs and ME emotion categories. A preliminary AU-Emo mapping table is proposed with justified explanations and supportive experimental results. Several unsolved issues are also summarized for future work.


Series: IEEE transactions on affective computing
ISSN: 2371-9850
ISSN-E: 1949-3045
ISSN-L: 2371-9850
Issue: Online first
DOI: 10.1109/TAFFC.2022.3182342
OADOI: https://oadoi.org/10.1109/taffc.2022.3182342
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Subjects:
4D
Funding: This work was supported by the Academy of Finland (Postdoc Project 6+E with grant number 323287, Academy Professor project EmotionAI with grant numbers 336116, 345122) and Infotech Oulu.
Academy of Finland Grant Numbers: 323287, 336116, 345122 (Academy of Finland funding decisions)
Copyright information: © The Author(s) 2022. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.