Speaker Recognition Using Speech Enhancement and Attention Models (cs.SD)

  • January 19, 2020
  • Notes

This paper proposes a speaker recognition architecture that cascades speech enhancement with speaker processing. Its aim is to improve speaker recognition performance when speech signals are corrupted by noise. Instead of handling speech enhancement and speaker recognition separately, the two modules are integrated into a single framework and jointly optimised using deep neural networks. Furthermore, to increase robustness against noise, a multi-stage attention mechanism is employed to highlight speaker-related features learned from context information in the time and frequency domains. To evaluate the speaker identification and verification performance of the approach, the authors test it on VoxCeleb1, one of the most widely used benchmark datasets. They also test its robustness on VoxCeleb1 data corrupted by three types of interference (general noise, music, and babble) at different signal-to-noise ratio (SNR) levels. The experimental results show that the proposed approach, combining speech enhancement with multi-stage attention models, outperforms two strong baselines in most acoustic conditions.
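The abstract describes the architecture only at a high level. As a rough illustration, the PyTorch sketch below shows how a speech-enhancement front end, stacked time/frequency attention stages, and a speaker classifier can be cascaded and trained with one joint loss; all module names, layer sizes, the mask-based enhancement front end, the simple attention design, and the loss weighting are assumptions for illustration, not the paper's actual model.

```python
# A minimal sketch (NOT the authors' code) of a cascaded
# enhancement -> attention -> speaker-recognition pipeline with joint
# optimisation. Every architectural detail here is an assumption.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Assumed front end: a BLSTM predicts a soft mask over log-mel features."""
    def __init__(self, n_mels: int = 40, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.mask = nn.Sequential(nn.Linear(2 * hidden, n_mels), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, time, n_mels)
        h, _ = self.rnn(x)
        return x * self.mask(h)            # element-wise mask -> enhanced features

class TimeFreqAttention(nn.Module):
    """One attention stage: re-weights frames (time) and bands (frequency)."""
    def __init__(self, n_mels: int = 40):
        super().__init__()
        self.time_att = nn.Linear(n_mels, 1)
        self.freq_att = nn.Parameter(torch.zeros(n_mels))

    def forward(self, x):                  # x: (batch, time, n_mels)
        t_w = torch.softmax(self.time_att(x), dim=1)   # per-frame weights
        f_w = torch.softmax(self.freq_att, dim=0)      # per-band weights
        return x * t_w * f_w

class SpeakerNet(nn.Module):
    """Cascade: enhancement -> stacked attention stages -> speaker embedding."""
    def __init__(self, n_mels: int = 40, n_speakers: int = 1251, stages: int = 2):
        super().__init__()
        self.enhance = EnhancementNet(n_mels)
        self.attention = nn.ModuleList(TimeFreqAttention(n_mels) for _ in range(stages))
        self.encoder = nn.LSTM(n_mels, 256, batch_first=True)
        self.classifier = nn.Linear(256, n_speakers)   # VoxCeleb1 has 1,251 speakers

    def forward(self, noisy):
        x = self.enhance(noisy)
        for att in self.attention:
            x = att(x)
        _, (h, _) = self.encoder(x)
        emb = h[-1]                        # utterance-level speaker embedding
        return self.classifier(emb), x     # logits + enhanced features

def joint_loss(logits, labels, enhanced, clean, alpha: float = 0.5):
    """Joint objective (assumed form): speaker cross-entropy plus a weighted
    enhancement reconstruction term, so one backward pass updates both modules."""
    ce = nn.functional.cross_entropy(logits, labels)
    mse = nn.functional.mse_loss(enhanced, clean)
    return ce + alpha * mse
```

On one plausible reading of the abstract, the joint optimisation is the key point: gradients from the speaker loss flow back through the enhancement module, so enhancement is tuned for recognition rather than for perceptual quality alone.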

Original title: Robust Speaker Recognition Using Speech Enhancement And Attention Model

Original abstract: In this paper, a novel architecture for speaker recognition is proposed by cascading speech enhancement and speaker processing. Its aim is to improve speaker recognition performance when speech signals are corrupted by noise. Instead of individually processing speech enhancement and speaker recognition, the two modules are integrated into one framework by a joint optimisation using deep neural networks. Furthermore, to increase robustness against noise, a multi-stage attention mechanism is employed to highlight the speaker related features learned from context information in time and frequency domain. To evaluate speaker identification and verification performance of the proposed approach, we test it on the dataset of VoxCeleb1, one of mostly used benchmark datasets. Moreover, the robustness of our proposed approach is also tested on VoxCeleb1 data when being corrupted by three types of interferences, general noise, music, and babble, at different signal-to-noise ratio (SNR) levels. The obtained results show that the proposed approach using speech enhancement and multi-stage attention models outperforms two strong baselines not using them in most acoustic conditions in our experiments.

Original authors: Yanpei Shi, Qiang Huang, Thomas Hain

Original link: https://arxiv.org/abs/2001.05031