
Yuan Zong, Wenming Zheng, Xiaopeng Hong, Chuangao Tang, Zhen Cui, and Guoying Zhao. 2019. Cross-Database Micro-Expression Recognition: A Benchmark. In Proceedings of the 2019 on International Conference on Multimedia Retrieval (ICMR ’19). Association for Computing Machinery, New York, NY, USA, 354–363. DOI:https://doi.org/10.1145/3323873.3326590

Cross-database micro-expression recognition: a benchmark

Author: Zong, Yuan1; Zheng, Wenming2; Hong, Xiaopeng3; Tang, Chuangao2; Cui, Zhen4; Zhao, Guoying5
Organizations: 1School of Biological Science and Medical Engineering, Southeast University Nanjing, China
2Key Laboratory of Child Development and Learning Science of Ministry of Education, Southeast University Nanjing, China
3Xi’an Jiaotong University Xi’an, China
4School of Computer Science and Engineering, Nanjing University of Science and Technology Nanjing, China
5Center for Machine Vision and Signal Analysis, University of Oulu Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 1.2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020042322151
Language: English
Published: Association for Computing Machinery, 2019
Publish Date: 2020-04-23
Description:

Abstract

Cross-database micro-expression recognition (CDMER) is one of the recently emerging and interesting problems in micro-expression analysis. CDMER is more challenging than conventional micro-expression recognition (MER), because the training and testing samples in CDMER come from different micro-expression databases, resulting in inconsistency between the feature distributions of the training and testing sets. In this paper, we contribute to this topic from two aspects. First, we establish a CDMER experimental evaluation protocol that provides a standard platform for researchers to evaluate their proposed methods. Second, we conduct extensive benchmark experiments using nine state-of-the-art domain adaptation (DA) methods and six popular spatiotemporal descriptors to investigate the CDMER problem from two different perspectives, and we analyze and discuss the experimental results in depth. In addition, all the data and codes involving CDMER in this paper are released on our project website: http://aip.seu.edu.cn/cdmer.
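The core of the cross-database setting described above is that the classifier is trained entirely on one micro-expression database and evaluated on another, so the two feature distributions differ. The following is a minimal sketch of that evaluation loop, not the authors' released protocol code: the load_features() helper, the database names, and the choice of a linear SVM with macro F1 and accuracy metrics are illustrative assumptions.

```python
# Minimal sketch of a cross-database evaluation: train on a source
# micro-expression database, test on a different target database.
# Feature extraction (e.g. a spatiotemporal descriptor such as LBP-TOP)
# is abstracted behind the hypothetical load_features() helper.
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score


def load_features(database_name):
    """Hypothetical loader: returns (features, labels) for one database."""
    raise NotImplementedError("replace with your own descriptor pipeline")


def cross_database_eval(source_db, target_db):
    # Train only on the source database ...
    X_src, y_src = load_features(source_db)
    clf = LinearSVC(C=1.0).fit(X_src, y_src)

    # ... and evaluate on the unseen target database,
    # whose feature distribution differs from the source.
    X_tgt, y_tgt = load_features(target_db)
    y_pred = clf.predict(X_tgt)
    return {
        "accuracy": accuracy_score(y_tgt, y_pred),
        "mean_f1": f1_score(y_tgt, y_pred, average="macro"),
    }


# Example usage (database names are placeholders):
# print(cross_database_eval("CASME2", "SMIC_HS"))
```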


ISBN Print: 978-1-4503-6765-3
Pages: 354 - 363
DOI: 10.1145/3323873.3326590
OADOI: https://oadoi.org/10.1145/3323873.3326590
Host publication: ICMR '19: Proceedings of the 2019 on International Conference on Multimedia Retrieval
Conference: International Conference on Multimedia Retrieval
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: This work is supported by the National Key R&D Program of China under Grant 2018YFB1305200, the National Natural Science Foundation of China under Grant 61572009 and Grant 61772419, the Fundamental Research Funds for the Central Universities under Grant 2242018K3DN01 and Grant 2242019K40047, the Tencent AI Lab Rhino-Bird Focused Research Program under Grant JR201922, Academy of Finland, Tekes Fidipro Program, and Infotech Oulu.
Copyright information: © Association for Computing Machinery 2019. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 2019 on International Conference on Multimedia Retrieval (ICMR '19), https://doi.org/10.1145/3323873.3326590.