Kent Academic Repository

Source-Aware Context Network for Single-Channel Multi-speaker Speech Separation

Li, Zengxi, Song, Yan, Dai, Li-Rong, McLoughlin, Ian Vince (2018) Source-Aware Context Network for Single-Channel Multi-speaker Speech Separation. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 681-685. IEEE. ISBN 978-1-5386-4659-5. E-ISBN 978-1-5386-4658-8. (doi:10.1109/ICASSP.2018.8461578) (KAR id:67161)


Deep learning based approaches have achieved promising performance in speaker-dependent single-channel multi-speaker speech separation. However, partly due to the label permutation problem, they may encounter difficulties in speaker-independent conditions. Recent methods address this problem with assignment operations. In contrast, we propose a novel source-aware context network, which explicitly takes speech sources as input alongside the mixture signal. By exploiting the temporal dependency and continuity of the same source signal, the permutation order of the outputs can be determined without any additional post-processing. Furthermore, a Multi-time-step Prediction Training strategy is proposed to address the mismatch between the training and inference stages. Experimental results on the benchmark WSJ0-2mix dataset show that our network achieves results comparable to or better than state-of-the-art methods in both closed-set and open-set conditions, in terms of Signal-to-Distortion Ratio (SDR) improvement.
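The core idea of resolving output permutation via temporal continuity can be illustrated with a toy sketch (this is an illustrative assumption, not the paper's actual network): each newly estimated source frame is assigned to the output slot whose previous frame it most closely continues, here measured by a simple inner-product similarity and a greedy assignment suitable for the two-speaker case.

```python
import numpy as np

def order_by_continuity(prev_sources, new_estimates):
    """Assign each new source estimate to the output slot it best continues.

    prev_sources:  (n_src, frame_dim) array of previous-step source frames.
    new_estimates: (n_src, frame_dim) array of current-step estimates in
                   arbitrary order.
    Returns a list `order` where order[i] is the index in new_estimates
    assigned to output slot i.
    """
    n = len(prev_sources)
    # Pairwise similarity between every previous frame and every new estimate.
    sim = prev_sources @ new_estimates.T  # shape (n, n)
    order = [None] * n
    free_rows, free_cols = set(range(n)), set(range(n))
    # Greedy: repeatedly pick the most similar remaining (slot, estimate) pair.
    for _ in range(n):
        i, j = max(((r, c) for r in free_rows for c in free_cols),
                   key=lambda rc: sim[rc])
        order[i] = j
        free_rows.remove(i)
        free_cols.remove(j)
    return order

# Two toy source frames; the new estimates arrive in swapped order.
prev = np.array([[1.0, 0.0], [0.0, 1.0]])
new = np.array([[0.0, 1.0], [1.0, 0.0]])
print(order_by_continuity(prev, new))  # → [1, 0]: swap detected
```

A greedy match is sufficient for two speakers; for more sources an optimal assignment (e.g. the Hungarian algorithm) would be the natural generalisation.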

Item Type: Conference or workshop item (Proceeding)
DOI/Identification number: 10.1109/ICASSP.2018.8461578
Subjects: T Technology
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Ian McLoughlin
Date Deposited: 30 May 2018 11:59 UTC
Last Modified: 09 Dec 2022 07:33 UTC

University of Kent Author Information

McLoughlin, Ian Vince.

