Disentangled Speech Embeddings using Cross-modal Self-supervision (cs.SD)

  • March 17, 2020
  • Notes

The goal of this paper is to learn representations of speaker identity without manually annotated data. To this end, the authors develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video. The key idea behind the approach is to tease apart, without annotation, the representations of linguistic content and speaker identity. A two-stream architecture is constructed which (1) shares low-level features common to both representations and (2) provides a natural mechanism for explicitly disentangling these factors, offering the potential for greater generalisation to novel combinations of content and identity and ultimately producing more robust speaker identity representations. The method is trained on a large-scale audio-visual dataset of talking heads "in the wild", and its efficacy is demonstrated by evaluating the learned speaker representations on standard speaker recognition benchmarks.
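To make the two-stream idea more concrete, the sketch below is a minimal, hypothetical PyTorch rendering under my own assumptions, not the authors' published architecture: a shared low-level audio encoder feeds a frame-level content head (which could be trained against visual features with a synchrony-style contrastive loss) and a time-pooled identity head. All names and hyperparameters (`TwoStreamAudioEncoder`, `contrastive_sync_loss`, the 40-dimensional log-mel input) are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' exact model) of a two-stream encoder:
# a shared trunk produces low-level features; a content head keeps temporal
# resolution for audio-visual synchrony, while an identity head pools over
# time to yield a speaker embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamAudioEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared low-level features common to both representations.
        self.shared = nn.Sequential(
            nn.Conv1d(40, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Content stream: frame-level features for synchrony with video.
        self.content_head = nn.Conv1d(128, feat_dim, kernel_size=3, padding=1)
        # Identity stream: temporal pooling gives one embedding per clip.
        self.identity_head = nn.Sequential(
            nn.Conv1d(128, feat_dim, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, mel):  # mel: (B, 40, T) log-mel spectrogram
        h = self.shared(mel)
        content = self.content_head(h)                 # (B, D, T)
        identity = self.identity_head(h).squeeze(-1)   # (B, D)
        return content, identity

def contrastive_sync_loss(audio_emb, video_emb, temperature=0.07):
    """InfoNCE-style cross-modal loss: audio/video pairs from the same clip
    are pulled together, mismatched pairs in the batch are pushed apart."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature                   # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```

In a setup like this, the identity head would be trained with a coarser cross-modal matching signal (same speaker versus different speaker across clips), while the content head uses fine-grained temporal synchrony; keeping the trunk shared but the heads separate is what provides the explicit handle for disentangling the two factors.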

Original title: Disentangled Speech Embeddings using Cross-modal Self-supervision

Original abstract: The objective of this paper is to learn representations of speaker identity without access to manually annotated data. To do so, we develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video. The key idea behind our approach is to tease apart—without annotation—the representations of linguistic content and speaker identity. We construct a two-stream architecture which: (1) shares low-level features common to both representations; and (2) provides a natural mechanism for explicitly disentangling these factors, offering the potential for greater generalisation to novel combinations of content and identity and ultimately producing speaker identity representations that are more robust. We train our method on a large-scale audio-visual dataset of talking heads `in the wild', and demonstrate its efficacy by evaluating the learned speaker representations for standard speaker recognition performance.

Original authors: Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman

Original link: http://cn.arxiv.org/abs/2002.08742