4DME: A Spontaneous 4D Micro-Expression Dataset with Multimodalities
Authors: Li, Xiaobai (1); Cheng, Shiyang (2); Li, Yante (1)
(1) Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
(2) Intelligent Behaviour Understanding Group (iBUG), Imperial College London, UK
Persistent link: http://urn.fi/urn:nbn:fi-fe20230913123982
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publish date: 2023-09-13
Abstract: Micro-expressions (MEs) are a special form of facial expression that may occur when people try to hide their true feelings. MEs are important clues for revealing people's true feelings, but they are difficult or even impossible for ordinary people to perceive with the naked eye, as they are very short and subtle. Robust computer vision methods are expected to be developed for automatic ME analysis, which requires large amounts of ME data. Current ME datasets are insufficient, and most contain only a single modality of 2D color video. Research on 4D data for ordinary facial expressions has flourished, but so far no 4D data has been available for ME studies. In the current study, we introduce the 4DME dataset: a new spontaneous ME dataset that includes 4D data along with three other video modalities. Both micro- and macro-expression clips are annotated in 4DME, with 22 AU labels and five categories of emotion labels. Experiments are carried out using three 2D-based methods and one 4D-based method to provide baseline results. The results indicate that 4D data can potentially benefit ME recognition. The 4DME dataset can be used for developing 4D-based approaches, or for exploring the fusion of multiple video sources (e.g., texture and depth) for ME analysis in the future. In addition, we emphasize the importance of forming clear and unified criteria for ME annotation in future ME data collection studies. Several key questions related to ME annotation are listed and discussed in depth, especially the relationship between AUs and ME emotion categories. A preliminary AU-Emo mapping table is proposed, with justified explanations and supporting experimental results. Several unsolved issues are also summarized for future work.
Journal: IEEE Transactions on Affective Computing
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work was supported by the Academy of Finland (Postdoc Project 6+E, grant number 323287; Academy Professor project EmotionAI, grant numbers 336116 and 345122) and Infotech Oulu.
Academy of Finland Grant Numbers:
323287 (Academy of Finland funding decision)
336116 (Academy of Finland funding decision)
345122 (Academy of Finland funding decision)
© The Author(s) 2022. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.