University of Oulu

T. T. Quynh Le, T. -K. Tran and M. Rege, "Dynamic image for micro-expression recognition on region-based framework," 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 2020, pp. 75-81, doi: 10.1109/IRI49571.2020.00019

Dynamic image for micro-expression recognition on region-based framework

Author: Le, Trang Thanh Quynh1; Tran, Thuong-Khanh2; Rege, Manjeet1
Organizations: 1School of Engineering University of St. Thomas St Paul, MN, USA
2CMVS University of Oulu Oulu, Finland
Format: article
Version: accepted version
Access: open
Language: English
Published: Institute of Electrical and Electronics Engineers, 2020
Publish Date: 2020-11-20


Facial micro-expressions are involuntary facial expressions of low intensity and short duration through which hidden emotions can be revealed. Micro-expression analysis has received increasing attention and advanced considerably in the field of computer vision. However, studying micro-expressions remains very challenging and resource-intensive. Most recent works have attempted to improve spontaneous facial micro-expression recognition with sophisticated, hand-crafted feature extraction techniques; deep neural networks have also been adopted for this task. In this paper, we present a compact framework in which a rank pooling concept called the dynamic image is employed as a descriptor to extract informative features from selected regions of interest, and a convolutional neural network (CNN) is deployed on the elicited dynamic images to recognize the micro-expressions therein. In particular, a facial motion magnification technique is applied to the input sequences to enhance the magnitude of facial movements in the data. Subsequently, rank pooling is applied to obtain dynamic images. Only a fixed number of localized facial areas are extracted from the dynamic images, based on observed dominant muscular changes. CNN models are then fit to the final feature representation for the emotion classification task. The framework is simple compared with those of other works, yet the experimental results achieved throughout the study demonstrate its effectiveness. The experiments are evaluated on three state-of-the-art databases: CASME II, SMIC and SAMM.
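The dynamic image described in the abstract summarizes a video clip as a single image whose pixel values encode temporal ordering. As a rough illustration (not the authors' implementation), the commonly used approximate rank pooling weighting — a weighted sum of frames with coefficients that grow linearly with frame index — could be sketched as follows; the function name and the final [0, 255] rescaling are illustrative assumptions:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of T frames into one image via approximate rank
    pooling: a weighted temporal sum whose coefficients encode frame order.

    frames: float array of shape (T, H, W) or (T, H, W, C).
    Returns an array of shape (H, W[, C]) rescaled to [0, 255].
    """
    T = frames.shape[0]
    # Linear weights alpha_t = 2t - T - 1 for t = 1..T: early frames get
    # negative weights, late frames positive, so the direction of facial
    # motion over time is preserved in the pooled image.
    alphas = (2 * np.arange(1, T + 1) - T - 1).astype(frames.dtype)
    di = np.tensordot(alphas, frames, axes=(0, 0))
    # Rescale to [0, 255] so the result can serve as ordinary CNN input
    # (an assumed normalization; the paper does not specify this step).
    lo, hi = di.min(), di.max()
    return (di - lo) / (hi - lo + 1e-8) * 255.0
```

In the framework outlined above, such pooled images (computed after motion magnification) would then be cropped to the fixed regions of interest and fed to the CNN.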


ISBN: 978-1-7281-1054-7
ISBN Print: 978-1-7281-1055-4
Pages: 75 - 81
Article number: 9191391
DOI: 10.1109/IRI49571.2020.00019
Host publication: 21st IEEE International Conference on Information Reuse and Integration for Data Science, IRI 2020
Conference: IEEE International Conference on Information Reuse and Integration for Data Science
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.