University of Oulu

Y. Zong, W. Zheng, Z. Cui, G. Zhao and B. Hu, "Toward Bridging Microexpressions From Different Domains," in IEEE Transactions on Cybernetics. doi: 10.1109/TCYB.2019.2914512

Toward bridging microexpressions from different domains

Author: Zong, Yuan1,2; Zheng, Wenming1; Cui, Zhen3; Zhao, Guoying2; Hu, Bin4
Organizations: 1Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
2Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, 90014 Oulu, Finland
3School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
4Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 1.5 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2019120445593
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Publish Date: 2019-12-04
Description:

Abstract

Recently, microexpression recognition has attracted considerable attention from researchers due to its challenges and valuable applications. However, most existing methods are evaluated and tested on a single database, which raises the question of whether these methods remain effective when the training and testing samples belong to different domains, for example, different microexpression databases. In that case, a large feature distribution difference may exist between the training (source) and testing (target) samples, making microexpression recognition tasks more difficult. To solve this challenging problem, that is, cross-domain microexpression recognition, in this paper we propose an effective method consisting of an auxiliary set selection model (ASSM) and a transductive transfer regression model (TTRM). The ASSM is designed to automatically select an optimal set of samples from the target domain to serve as the auxiliary set, which is then used for TTRM training. The TTRM, in turn, aims to bridge the feature distribution gap between the source and target domains by learning a joint regression model from the source-domain samples and the auxiliary set selected from the target domain. We evaluate the proposed TTRM plus ASSM through extensive cross-domain microexpression recognition experiments on the SMIC and CASME II databases. Compared with recent state-of-the-art domain adaptation methods, our method performs more satisfactorily on cross-domain microexpression recognition tasks.
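The two-stage pipeline sketched in the abstract (select an auxiliary set from the unlabeled target domain, then fit a joint regression over source plus auxiliary samples) can be illustrated with a toy NumPy example. This is not the authors' ASSM/TTRM formulation; the distance-based auxiliary selection, the pseudo-labeling step, and the ridge regression are simplified stand-ins chosen here only to show the shape of the transductive idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source domain": labeled samples from a known linear labeling function.
Xs = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
ys = Xs @ w_true

# Toy "target domain": same labeling rule but shifted features (domain gap).
Xt = rng.normal(loc=0.5, size=(60, 5))
yt_true = Xt @ w_true  # held out, used only to evaluate the final model


def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


# Stage 1 (stand-in for ASSM): fit on the source, then pick the target
# samples closest to the source mean as the auxiliary set, pseudo-labeled
# by the source model.
w_src = ridge_fit(Xs, ys)
dist = np.linalg.norm(Xt - Xs.mean(axis=0), axis=1)
aux_idx = np.argsort(dist)[:20]
Xa, ya = Xt[aux_idx], Xt[aux_idx] @ w_src  # pseudo-labels

# Stage 2 (stand-in for TTRM): joint regression over source + auxiliary set,
# so the model is trained with samples from both domains.
w_joint = ridge_fit(np.vstack([Xs, Xa]), np.concatenate([ys, ya]))

err_src_only = float(np.mean((Xt @ w_src - yt_true) ** 2))
err_joint = float(np.mean((Xt @ w_joint - yt_true) ** 2))
print(err_src_only, err_joint)
```

In this noiseless toy both models recover the labeling function almost exactly; the point is only the data flow, where target-domain samples enter training via selection and pseudo-labels rather than ground-truth annotation.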


Series: IEEE transactions on cybernetics
ISSN: 2168-2267
ISSN-E: 2168-2275
ISSN-L: 2168-2267
Volume: Early access
Issue: Early access
Pages: 1 - 14
DOI: 10.1109/TCYB.2019.2914512
OADOI: https://oadoi.org/10.1109/TCYB.2019.2914512
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Subjects:
Funding: This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1305200, in part by the National Basic Research Program of China under Grant 2015CB351704, in part by the National Natural Science Foundation of China under Grant 61572009, Grant 61632014, Grant 61802058, and Grant 6181101568, in part by the Fundamental Research Funds for the Central Universities under Grant 2242018K3DN01 and Grant 2242019K40047, in part by the China Scholarship Council, in part by the Tencent AI Lab Rhino-Bird Focused Research Program under Grant JR201922, in part by the Academy of Finland, in part by the Tekes Fidipro Program, and in part by Infotech Oulu.
Copyright information: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.