N. Liu et al., "Unsupervised Cross-Corpus Speech Emotion Recognition Using Domain-Adaptive Subspace Learning," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, 2018, pp. 5144-5148. doi: 10.1109/ICASSP.2018.8461848
Unsupervised cross-corpus speech emotion recognition using domain-adaptive subspace learning
|Author:||Liu, Na (1,2,3); Zong, Yuan (4); Zhang, Baofeng (3,1)|
1School of Computer Science and Engineering, Tianjin University of Technology, China
2Center for Machine Vision and Signal Analysis, University of Oulu, Finland
3School of Electrical and Electronic Engineering, Tianjin University of Technology, China
4Research Center for Learning Science, Southeast University, China
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2019040411106|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2019-04-04|
In this paper, we investigate an interesting problem, i.e., unsupervised cross-corpus speech emotion recognition (SER), in which the training and testing speech signals come from two different speech emotion corpora. The training speech signals are labeled, while the labels of the testing speech signals are entirely unknown. Under this setting, the training (source) and testing (target) speech signals may have different feature distributions, and therefore many existing SER methods fail. To deal with this problem, we propose a domain-adaptive subspace learning (DoSL) method for learning a projection matrix with which we can transform the source and target speech signals from the original feature space to the label space. The transformed source and target speech signals in the label space have similar feature distributions. Consequently, the classifier learned on the labeled source speech signals can effectively predict the emotional states of the unlabeled target speech signals. To evaluate the performance of the proposed DoSL method, we carry out extensive cross-corpus SER experiments on three speech emotion corpora: EmoDB, eNTERFACE, and AFEW 4.0. Compared with recent state-of-the-art cross-corpus SER methods, the proposed DoSL achieves better overall results.
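To illustrate the core idea of projecting source and target features into the label space, the following is a minimal sketch, not the authors' exact DoSL objective: it learns the projection matrix by simple ridge regression from labeled source features to one-hot labels, then predicts target emotions as the largest coordinate in the label space. The toy corpora, dimensions, and the ridge regularizer `lam` are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source" and "target" corpora: same 3 emotion classes, but the
# target features are scaled and shifted to mimic a corpus mismatch.
n_per, d, k = 40, 10, 3
centers = rng.normal(size=(k, d)) * 3.0
Xs = np.vstack([centers[c] + rng.normal(size=(n_per, d)) for c in range(k)])
ys = np.repeat(np.arange(k), n_per)
Xt = np.vstack([1.5 * centers[c] + 0.5 + rng.normal(size=(n_per, d))
                for c in range(k)])
yt = np.repeat(np.arange(k), n_per)  # used only to score the sketch

Ys = np.eye(k)[ys]  # one-hot source labels, shape (n_source, k)

# Projection matrix W (feature space -> label space) via ridge regression:
# W = argmin ||Xs W - Ys||^2 + lam * ||W||^2, solved in closed form.
lam = 1.0
W = np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ Ys)

# Both corpora now live in the k-dimensional label space; classify the
# unlabeled target utterances by their largest label-space coordinate.
pred = np.argmax(Xt @ W, axis=1)
acc = float(np.mean(pred == yt))
print(f"target accuracy: {acc:.2f}")
```

DoSL itself additionally enforces that the transformed source and target distributions match (the domain-adaptive part), which this plain regression sketch omits; it only shows why a label-space projection makes the source-trained decision rule directly applicable to the target corpus.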
|Conference:||2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)|
|Pages:||5144-5148|
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences|
This research was supported by the Natural Science Foundation of China under Grants 61172185 and 61602345, the Application Foundation and Advanced Technology Research Project of Tianjin, the Academy of Finland, the Tekes Fidipro program, and Infotech Oulu.
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.