Robust Speaker Recognition Using Speech Enhancement and an Attention Model (cs.SD)
- January 19, 2020
- Notes
This paper proposes a novel speaker recognition architecture that cascades speech enhancement and speaker processing. Its aim is to improve speaker recognition performance when speech signals are corrupted by noise. Instead of handling speech enhancement and speaker recognition separately, the two modules are integrated into a single framework and jointly optimised with deep neural networks. Furthermore, to increase robustness against noise, a multi-stage attention mechanism is used to highlight speaker-related features learned from context information in the time and frequency domains. The speaker identification and verification performance of the proposed approach is evaluated on VoxCeleb1, one of the most widely used benchmark datasets. The robustness of the approach is also tested on VoxCeleb1 data corrupted by three types of interference (general noise, music, and babble) at different signal-to-noise ratio (SNR) levels. Experimental results show that the proposed approach, combining speech enhancement with a multi-stage attention model, outperforms two strong baselines under most acoustic conditions.
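The core idea of the multi-stage attention can be sketched as reweighting a spectrogram-like feature map first along the time axis and then along the frequency axis. The following is a minimal NumPy illustration of that idea only; the function name `multi_stage_attention` and the scoring vectors `w_t` and `w_f` are assumptions for this sketch (in the paper these weights would be learned inside the deep network, and the exact attention formulation differs):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_stage_attention(spec, w_t, w_f):
    """Illustrative two-stage attention over a (T, F) feature map.

    spec: (T, F) spectrogram-like features (T frames, F frequency bins).
    w_t:  (F,) scoring vector for frames (hypothetical learned weights).
    w_f:  (T,) scoring vector for frequency bins (hypothetical learned weights).
    """
    # Stage 1: time attention — score each frame and reweight frames,
    # emphasising frames that carry speaker-relevant information.
    time_scores = spec @ w_t                # (T,)
    alpha = softmax(time_scores)            # attention weights over time
    spec_t = spec * alpha[:, None]          # (T, F)

    # Stage 2: frequency attention — score each bin on the
    # time-reweighted map and reweight frequency bins.
    freq_scores = spec_t.T @ w_f            # (F,)
    beta = softmax(freq_scores)             # attention weights over frequency
    return spec_t * beta[None, :]           # (T, F)
```

In a real system the attended features would then feed the speaker embedding network; here the sketch only shows how sequential time-then-frequency reweighting composes.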
Original title: Robust Speaker Recognition Using Speech Enhancement And Attention Model
Original abstract: In this paper, a novel architecture for speaker recognition is proposed by cascading speech enhancement and speaker processing. Its aim is to improve speaker recognition performance when speech signals are corrupted by noise. Instead of individually processing speech enhancement and speaker recognition, the two modules are integrated into one framework by a joint optimisation using deep neural networks. Furthermore, to increase robustness against noise, a multi-stage attention mechanism is employed to highlight the speaker related features learned from context information in time and frequency domain. To evaluate speaker identification and verification performance of the proposed approach, we test it on the dataset of VoxCeleb1, one of mostly used benchmark datasets. Moreover, the robustness of our proposed approach is also tested on VoxCeleb1 data when being corrupted by three types of interferences, general noise, music, and babble, at different signal-to-noise ratio (SNR) levels. The obtained results show that the proposed approach using speech enhancement and multi-stage attention models outperforms two strong baselines not using them in most acoustic conditions in our experiments.
Original authors: Yanpei Shi, Qiang Huang, Thomas Hain
Original link: https://arxiv.org/abs/2001.05031