SUN Linhui, LIANG Wenqing, ZHANG Meng, LI Ping’an. Monaural Speech Separation Using Dual-Output Deep Neural Network with Multiple Joint Constraint[J]. Chinese Journal of Electronics, 2023, 32(3): 493-506. DOI: 10.23919/cje.2022.00.110

Monaural Speech Separation Using Dual-Output Deep Neural Network with Multiple Joint Constraint

Monaural speech separation is a significant research field in speech signal processing. To achieve better separation performance, we propose three novel joint-constraint loss functions and a multiple joint-constraint loss function for monaural speech separation based on a dual-output deep neural network (DNN). The multiple joint-constraint loss function for the DNN separation model not only restricts the ideal ratio mask (IRM) errors of the two outputs, but also constrains the relationship between the estimated IRMs and the magnitude spectrograms of the clean speech signals, the relationship between the two estimated IRMs, and the relationship between the estimated IRMs and the magnitude spectrogram of the mixed signal. The constraint strength is adjusted through three parameters to improve the accuracy of the speech separation model. Furthermore, we solve for the optimal weighting coefficients of the multiple joint-constraint loss function from an optimization perspective, which further improves the performance of the separation system. We conduct a series of speech separation experiments on the GRID corpus to validate the superior performance of the proposed method. The results show that, using perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), source-to-distortion ratio (SDR), signal-to-interference ratio (SIR), and source-to-artifact ratio (SAR) as the evaluation metrics, the proposed method outperforms the conventional DNN separation model. Taking speaker gender into consideration, we carry out experiments on Female-Female, Male-Male, and Male-Female cases, which show that our method improves the robustness and performance of the separation system compared with several previous approaches.
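The exact loss formulation is given in the full paper; purely as an illustration, the sketch below shows one way a multiple joint-constraint objective of this kind could be assembled for a dual-output model. The specific constraint forms (mask-sum consistency and masked-reconstruction error) and the names `irm_est1`, `irm_est2`, `alpha`, `beta`, `gamma` are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def multiple_joint_constraint_loss(irm_est1, irm_est2,      # estimated IRMs (dual outputs)
                                   irm_ref1, irm_ref2,      # reference (ideal) ratio masks
                                   mag_clean1, mag_clean2,  # clean-source magnitude spectrograms
                                   mag_mix,                 # mixture magnitude spectrogram
                                   alpha=0.1, beta=0.1, gamma=0.1):
    """Illustrative multi-term loss for a dual-output separation DNN.

    This sketch only mirrors the four relationships described in the abstract;
    the constraint forms and weights used in the paper may differ.
    """
    mse = lambda a, b: np.mean((a - b) ** 2)

    # 1) IRM estimation errors of the two outputs (base term).
    loss_irm = mse(irm_est1, irm_ref1) + mse(irm_est2, irm_ref2)

    # 2) Estimated IRMs vs. clean magnitude spectrograms:
    #    masking the mixture should reconstruct each clean source.
    loss_clean = (mse(irm_est1 * mag_mix, mag_clean1)
                  + mse(irm_est2 * mag_mix, mag_clean2))

    # 3) Relationship between the two estimated IRMs:
    #    ratio masks of the two sources should roughly sum to one.
    loss_masks = mse(irm_est1 + irm_est2, np.ones_like(irm_est1))

    # 4) Estimated IRMs vs. mixture magnitude:
    #    the two masked estimates should roughly add back to the mixture.
    loss_mix = mse((irm_est1 + irm_est2) * mag_mix, mag_mix)

    # Three weighting coefficients control the strength of the extra constraints.
    return loss_irm + alpha * loss_clean + beta * loss_masks + gamma * loss_mix
```

In the paper the three coefficients are not hand-tuned but solved for; as a simple stand-in for that step, one could search over (alpha, beta, gamma) on a validation set with a grid or gradient-based procedure.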
