Disentangled Speech Embeddings using Cross-modal Self-supervision (cs.SD)

  • March 17, 2020
  • Notes

The goal of this paper is to learn representations of speaker identity without access to manually annotated data. To this end, we develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video. The key idea behind our approach is to tease apart, without annotation, the representations of linguistic content and speaker identity. We construct a two-stream architecture that (1) shares low-level features common to both representations, and (2) provides a natural mechanism for explicitly disentangling these factors, offering the potential for better generalisation to novel combinations of content and identity and ultimately producing more robust speaker identity representations. We train our method on a large-scale audio-visual dataset of talking heads "in the wild" and demonstrate its effectiveness by evaluating the learned speaker representations on standard speaker recognition tasks.
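The two-stream design described above lends itself to a short schematic. The PyTorch sketch below shows one plausible way to share low-level audio features between a content branch and an identity branch and train them against a face stream with a contrastive loss. The module names (AudioTrunk, ContentHead, IdentityHead, FaceEncoder), layer sizes, and the single InfoNCE loss are illustrative assumptions, not the authors' actual architecture or training objectives.

```python
# Minimal sketch of a two-stream disentangling setup in the spirit of the abstract.
# All hyperparameters and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioTrunk(nn.Module):
    """Shared low-level audio features, common to the content and identity branches."""
    def __init__(self, n_mels=40, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )

    def forward(self, mel):              # mel: (B, n_mels, T)
        return self.net(mel)             # (B, hidden, T)


class ContentHead(nn.Module):
    """Frame-level embeddings intended to capture linguistic content."""
    def __init__(self, hidden=256, dim=128):
        super().__init__()
        self.proj = nn.Conv1d(hidden, dim, kernel_size=1)

    def forward(self, h):                # h: (B, hidden, T)
        return self.proj(h)              # (B, dim, T)


class IdentityHead(nn.Module):
    """Utterance-level embedding intended to capture speaker identity."""
    def __init__(self, hidden=256, dim=128):
        super().__init__()
        self.proj = nn.Linear(hidden, dim)

    def forward(self, h):                # h: (B, hidden, T)
        pooled = h.mean(dim=-1)          # temporal average pooling
        return F.normalize(self.proj(pooled), dim=-1)   # (B, dim)


class FaceEncoder(nn.Module):
    """Toy encoder for a face frame, providing the visual side of the cross-modal signal."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, face):             # face: (B, 3, H, W)
        return F.normalize(self.net(face), dim=-1)


def info_nce(query, keys, temperature=0.07):
    """Generic contrastive loss: the i-th query should match the i-th key."""
    logits = query @ keys.t() / temperature   # (B, B)
    target = torch.arange(query.size(0))
    return F.cross_entropy(logits, target)


if __name__ == "__main__":
    B, T = 4, 100
    mel = torch.randn(B, 40, T)          # log-mel features for each audio clip
    face = torch.randn(B, 3, 64, 64)     # a face frame sampled from the same clip

    trunk, c_head, i_head, f_enc = AudioTrunk(), ContentHead(), IdentityHead(), FaceEncoder()
    shared = trunk(mel)                                           # shared low-level features
    content = F.normalize(c_head(shared).mean(dim=-1), dim=-1)    # pooled only to keep the demo short
    identity = i_head(shared)
    face_emb = f_enc(face)

    # Cross-modal self-supervision: audio and face from the same clip act as positives.
    loss = info_nce(content, face_emb) + info_nce(identity, face_emb)
    loss.backward()
    print(float(loss))
```

The demo in the `__main__` block runs on random tensors and only checks that shapes line up and gradients flow; real training would replace the placeholder loss with the paper's actual self-supervised objectives for the content and identity branches.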

Original title: Disentangled Speech Embeddings using Cross-modal Self-supervision

Original abstract: The objective of this paper is to learn representations of speaker identity without access to manually annotated data. To do so, we develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video. The key idea behind our approach is to tease apart—without annotation—the representations of linguistic content and speaker identity. We construct a two-stream architecture which: (1) shares low-level features common to both representations; and (2) provides a natural mechanism for explicitly disentangling these factors, offering the potential for greater generalisation to novel combinations of content and identity and ultimately producing speaker identity representations that are more robust. We train our method on a large-scale audio-visual dataset of talking heads `in the wild', and demonstrate its efficacy by evaluating the learned speaker representations for standard speaker recognition performance.

Original authors: Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman

Original link: http://cn.arxiv.org/abs/2002.08742