HAN Zhiyan, WANG Jian, WANG Xu, et al., “Robust Feature Extraction for Speech Recognition Based on Perceptually Motivated MUSIC and CCBC,” Chinese Journal of Electronics, vol. 20, no. 1, pp. 105-110, 2011.

Robust Feature Extraction for Speech Recognition Based on Perceptually Motivated MUSIC and CCBC

  • Received Date: 2009-02-01
  • Rev Recd Date: 2010-10-01
  • Publish Date: 2011-01-05
  • A novel feature extraction algorithm is proposed to improve the robustness of speech recognition. The core technique incorporates perceptual information into the Multiple signal classification (MUSIC) spectrum, which provides improved robustness and computational efficiency compared with the Mel frequency cepstral coefficient (MFCC) technique; cepstrum coefficients are then extracted as the feature parameters. The effectiveness of these parameters is discussed in terms of class separability and speaker variability. To further improve robustness, Canonical correlation based compensation (CCBC) is incorporated to cope with the mismatch between the training and test sets. The technique is evaluated with improved Back-propagation neural networks (BPNN) on three tasks: different speakers, different recording channels, and different noisy environments. The experimental results show that the novel feature is more robust and effective than MFCC, and that the CCBC algorithm makes the speech recognition system robust under all three kinds of mismatch.
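The abstract builds its features on the MUSIC pseudospectrum of each speech frame. As a minimal sketch of that spectral-estimation step (not the paper's exact perceptually weighted variant), the following assumes a sample autocorrelation matrix built from lagged frames, with `order` and `n_sig` as hypothetical parameter choices:

```python
import numpy as np

def music_pseudospectrum(frame, order, n_sig, n_freqs=257):
    """MUSIC pseudospectrum of one signal frame (illustrative sketch).

    frame  : 1-D array of samples
    order  : size of the autocorrelation matrix
    n_sig  : assumed dimension of the signal subspace
    """
    M = order
    # Sample autocorrelation matrix from overlapping lagged windows.
    X = np.array([frame[i:i + M] for i in range(len(frame) - M + 1)])
    R = (X.conj().T @ X) / X.shape[0]

    # Eigendecompose; the smallest eigenvalues span the noise subspace.
    w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
    En = V[:, : M - n_sig]          # noise-subspace eigenvectors

    # P(f) = 1 / || En^H e(f) ||^2 with steering vector
    # e(f) = [1, exp(-j2*pi*f), ..., exp(-j2*pi*f(M-1))].
    freqs = np.linspace(0.0, 0.5, n_freqs)   # normalized frequency
    k = np.arange(M)
    E = np.exp(-2j * np.pi * freqs[:, None] * k[None, :])
    denom = np.sum(np.abs(E.conj() @ En) ** 2, axis=1)
    return freqs, 1.0 / denom
```

A sinusoid buried in noise produces a sharp pseudospectrum peak at its normalized frequency; in the paper's pipeline, a perceptual (Mel-like) weighting of this spectrum would precede the cepstral analysis.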
