University of Oulu

X. Zhao, L. Zhou, Y. Xie, Y. Tong and J. Shi, "Speech separation algorithm using gated recurrent network based on microphone array," Intelligent Automation & Soft Computing, vol. 36, no. 3, pp. 3087–3100, 2023.

Speech separation algorithm using gated recurrent network based on microphone array

Author: Zhao, Xiaoyan1; Zhou, Lin2; Xie, Yue1;
Organizations: 1School of Information and Communication Engineering, Nanjing Institute of Technology, Nanjing, 211167, China
2School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
3University of Oulu, Oulu, 90014, FI, Finland
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 0.3 MB)
Language: English
Published: Tech Science Press, 2023
Publish Date: 2023-09-08


Speech separation is an active research topic that plays an important role in numerous applications, such as speaker recognition, hearing prostheses, and autonomous robots. Many algorithms have been put forward to improve separation performance, but speech separation in reverberant, noisy environments remains a challenging task. To address this, a novel speech separation algorithm using a gated recurrent unit (GRU) network based on a microphone array is proposed in this paper. The main aim of the proposed algorithm is to improve separation performance while reducing computational cost. The proposed algorithm extracts the sub-band steered response power-phase transform (SRP-PHAT) weighted by a gammatone filter as the speech separation feature, owing to its discriminative and robust spatial position information. Since the GRU network has the advantage of processing time-series data with faster training and fewer training parameters, the GRU model is adopted to process the separation features of several sequential frames in the same sub-band to estimate the ideal ratio mask (IRM). The proposed algorithm decomposes the mixture signals into time-frequency (TF) units using a gammatone filter bank in the frequency domain, and the target speech is reconstructed in the frequency domain by masking the mixture signal according to the estimated IRM. Performing both the decomposition of the mixture signal and the reconstruction of the target signal in the frequency domain reduces the total computational cost. Experimental results demonstrate that the proposed algorithm realizes omnidirectional speech separation in noisy and reverberant environments, provides good performance in terms of speech quality and intelligibility, and generalizes to reverberation.
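As a rough illustration of the IRM masking step described in the abstract (a minimal sketch, not the authors' implementation; the function names and the toy spectrograms are illustrative assumptions), the ideal ratio mask for each time-frequency unit and its application to a mixture spectrogram could look like this:

```python
import numpy as np

def ideal_ratio_mask(target_mag, interference_mag, eps=1e-8):
    """IRM: per TF unit, the ratio of target energy to total energy.

    Values lie in [0, 1]; eps guards against division by zero in
    silent TF units.
    """
    t2 = target_mag ** 2
    i2 = interference_mag ** 2
    return t2 / (t2 + i2 + eps)

def apply_mask(mixture_spec, mask):
    """Reconstruct the target in the frequency domain by masking
    the mixture spectrogram element-wise."""
    return mixture_spec * mask

# Toy magnitude spectrograms (4 sub-bands x 8 frames), stand-ins for
# the gammatone-filter-bank decomposition used in the paper.
rng = np.random.default_rng(0)
target = np.abs(rng.standard_normal((4, 8)))
noise = np.abs(rng.standard_normal((4, 8)))

mask = ideal_ratio_mask(target, noise)
estimate = apply_mask(target + noise, mask)
```

In the paper the mask is not computed from the known target and interference as above; it is estimated by the GRU network from the sub-band SRP-PHAT features, and the formulation here simply shows what the network's training target and the reconstruction step amount to.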


Series: Intelligent automation & soft computing
ISSN: 1079-8587
ISSN-E: 2326-005X
ISSN-L: 1079-8587
Volume: 36
Issue: 3
Pages: 3087 - 3100
DOI: 10.32604/iasc.2023.030180
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This work is supported by the Nanjing Institute of Technology (NIT) fund for Research Startup Projects of Introduced Talents under Grant No. YKJ202019, the Natural Science Research Project of Higher Education Institutions in Jiangsu Province under Grant No. 21KJB510018, the National Natural Science Foundation of China (NSFC) under Grant No. 62001215, and the NIT fund for Doctoral Research Projects under Grant No. ZKJ2020003.
Copyright information: © The Author(s) 2023. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.