
Zhou, L., Xu, Y., Wang, T., Feng, K., & Shi, J. (2021). Microphone Array Speech Separation Algorithm Based on TC-ResNet. CMC-Computers, Materials & Continua, 69(2), 2705–2716. https://doi.org/10.32604/cmc.2021.017080

Microphone array speech separation algorithm based on TC-ResNet

Author: Zhou, Lin¹; Xu, Yue¹; Wang, Tianyi¹;
Organizations: ¹School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
²Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, FI-90014, Finland
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 0.6 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2021120859638
Language: English
Published: Tech Science Press, 2021
Publish Date: 2021-12-08

Abstract

Traditional separation methods have limited ability to handle the speech separation problem in highly reverberant, low signal-to-noise ratio (SNR) environments and thus achieve unsatisfactory results. In this study, a convolutional neural network with temporal convolution and a residual network (TC-ResNet) is proposed to realize speech separation in a complex acoustic environment. A simplified steered-response power phase transform, denoted GSRP-PHAT, is employed to reduce the computational cost. The extracted features are reshaped into a special tensor that serves as the system input, and temporal convolution is applied, which not only enlarges the receptive field of the convolution layers but also significantly reduces the network's computational cost. Residual blocks are used to combine multiresolution features and accelerate the training procedure. A modified ideal ratio mask is applied as the training target. Simulation results demonstrate that the proposed microphone array speech separation algorithm based on TC-ResNet achieves better performance in terms of source-to-distortion ratio, source-to-interference ratio, and short-time objective intelligibility in low-SNR and highly reverberant environments, particularly in untrained situations. This indicates that the proposed method generalizes to untrained conditions.
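
To make the pipeline described in the abstract more concrete, below is a minimal sketch of a TC-ResNet-style mask estimator, assuming PyTorch. The spatial features (stand-ins for the GSRP-PHAT features) are treated as channels so every convolution runs along the time axis only, residual blocks combine features at multiple resolutions, and a sigmoid output plays the role of the (modified) ideal ratio mask. All layer widths, kernel sizes, and the feature/mask dimensions here are illustrative assumptions, not the authors' implementation.

# Minimal sketch (PyTorch assumed); shapes and hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class TemporalResBlock(nn.Module):
    """Residual block whose convolutions span the time axis only."""

    def __init__(self, channels: int, kernel: int = 9):
        super().__init__()
        pad = kernel // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)  # skip connection combines multiresolution features


class TCResNetMaskEstimator(nn.Module):
    """Maps a (batch, n_features, n_frames) tensor of spatial features
    (e.g. SRP-PHAT-like values) to a per-frame mask in (0, 1)."""

    def __init__(self, n_features: int = 64, n_freq_bins: int = 257,
                 width: int = 128, n_blocks: int = 3):
        super().__init__()
        # TC-ResNet reshaping idea: treat the feature axis as channels,
        # so every convolution is purely temporal (1-D along frames).
        self.front = nn.Conv1d(n_features, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[TemporalResBlock(width) for _ in range(n_blocks)])
        self.head = nn.Conv1d(width, n_freq_bins, kernel_size=1)

    def forward(self, feats):
        h = torch.relu(self.front(feats))
        h = self.blocks(h)
        return torch.sigmoid(self.head(h))  # mask-like output per frame and bin


if __name__ == "__main__":
    # Toy forward pass: 4 utterances, 64 spatial features, 100 frames.
    model = TCResNetMaskEstimator()
    mask = model(torch.randn(4, 64, 100))
    print(mask.shape)  # torch.Size([4, 257, 100]), values in (0, 1)

In training, such a network would typically be optimized with, for example, a mean-squared-error loss between the predicted mask and the modified ideal ratio mask computed from the clean and mixed signals, in line with the training target named in the abstract.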


Series: Computers, Materials & Continua
ISSN: 1546-2218
ISSN-E: 1546-2226
ISSN-L: 1546-2218
Volume: 69
Issue: 2
Pages: 2705–2716
DOI: 10.32604/cmc.2021.017080
OADOI: https://oadoi.org/10.32604/cmc.2021.017080
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work is supported by the National Key Research and Development Program of China under Grant 2020YFC2004003 and Grant 2020YFC2004002, and the National Natural Science Foundation of China (NSFC) under Grant No. 61571106.
Copyright information: This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://creativecommons.org/licenses/by/4.0/