University of Oulu

P. Tzirakis, G. Trigeorgis, M. A. Nicolaou, B. W. Schuller and S. Zafeiriou, "End-to-End Multimodal Emotion Recognition Using Deep Neural Networks," in IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1301-1309, Dec. 2017. doi: 10.1109/JSTSP.2017.2764438

End-to-end multimodal emotion recognition using deep neural networks

Saved in:
Author: Tzirakis, Panagiotis1; Trigeorgis, George1; Nicolaou, Mihalis A.2; Schuller, Björn W.3,4; Zafeiriou, Stefanos5,6
Organizations: 1Department of Computing, Imperial College London, London SW7 2AZ, U.K.
2Department of Computing, Goldsmiths, University of London, London SE14 6NW, U.K.
3Group on Language, Audio and Music, Imperial College London, London SW7 2AZ, U.K.
4Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg 86159, Germany
5Department of Computing, Imperial College London, London SW7 2AZ, U.K.
6Center for Machine Vision and Signal Analysis, University of Oulu, Oulu 90014, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe201902276476
Language: English
Published: Institute of Electrical and Electronics Engineers, 2017
Publish Date: 2019-02-27
Description:

Abstract

Automatic affect recognition is a challenging task due to the various modalities through which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content across various styles of speaking, robust features need to be extracted. To this end, we utilize a convolutional neural network (CNN) to extract features from the speech, while for the visual modality a deep residual network of 50 layers is used. In addition to the importance of feature extraction, a machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, long short-term memory networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations of each of the streams, we manage to significantly outperform, in terms of concordance correlation coefficient, traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
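The pipeline the abstract describes can be summarized in a few lines of code. The following is a minimal PyTorch sketch, not the authors' exact configuration: the layer sizes, pooling choices, and the ccc_loss helper are illustrative assumptions, while the overall shape (a 1-D CNN over raw speech, a 50-layer residual network over video frames, an LSTM over the fused features, and an objective based on the concordance correlation coefficient) follows the abstract.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultimodalEmotionNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Audio branch: 1-D convolutions applied directly to raw waveform chunks.
        self.audio = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=80, stride=4), nn.ReLU(),
            nn.MaxPool1d(10),
            nn.Conv1d(40, 40, kernel_size=40, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # one 40-d vector per chunk
        )
        # Visual branch: ResNet-50 backbone with its classifier head removed.
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()           # 2048-d feature per frame
        self.visual = backbone
        # Temporal model over the concatenated audio/visual features.
        self.lstm = nn.LSTM(40 + 2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # arousal and valence per step

    def forward(self, wav, frames):
        # wav: (B, T, samples); frames: (B, T, 3, 224, 224)
        B, T = wav.shape[:2]
        a = self.audio(wav.reshape(B * T, 1, -1)).reshape(B, T, -1)
        v = self.visual(frames.reshape(B * T, 3, 224, 224)).reshape(B, T, -1)
        h, _ = self.lstm(torch.cat([a, v], dim=-1))
        return self.head(h)

def ccc_loss(pred, gold):
    # 1 - CCC, where CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    pm, gm = pred.mean(), gold.mean()
    pv = ((pred - pm) ** 2).mean()
    gv = ((gold - gm) ** 2).mean()
    cov = ((pred - pm) * (gold - gm)).mean()
    return 1 - 2 * cov / (pv + gv + (pm - gm) ** 2)

Training end-to-end then amounts to minimizing ccc_loss separately for the arousal and valence outputs, mirroring the concordance-based evaluation named in the abstract.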


Series: IEEE journal of selected topics in signal processing
ISSN: 1932-4553
ISSN-E: 1941-0484
ISSN-L: 1932-4553
Volume: 11
Issue: 8
Pages: 1301 - 1309
DOI: 10.1109/JSTSP.2017.2764438
OADOI: https://oadoi.org/10.1109/JSTSP.2017.2764438
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work was supported by the EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS) under Grant EP/L016796/1. The work of G. Trigeorgis was supported in part by the Google Fellowship in Machine Perception, Speech Technology and Computer Vision. The work of B. W. Schuller was supported in part by the EU Horizon 2020 Framework Programme (RIA ARIA VALUSPA under Grant 645378 and IA SEWA under Grant 645094). The work of S. Zafeiriou was supported in part by the FiDiPro Program of Tekes under Project 1849/31/2015. The guest editor coordinating the review of this paper and approving it for publication was Dr. Nancy F. Chen.
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.