Multilogue-Net: A Context-Aware RNN for Multimodal Emotion Detection and Sentiment Analysis in Conversation (cs.SD)
- March 17, 2020
- Notes
Sentiment analysis and emotion detection in conversation are key to a number of real-world applications, with different applications leveraging different kinds of data to achieve reasonably accurate predictions. Multimodal emotion detection and sentiment analysis are particularly useful because an application can use whichever subset of modalities its available data provides and still produce relevant predictions. Current multimodal systems fail to capture, across all modalities, the context of the conversation, the current speaker and listener(s) in the conversation, and the relevance of and relationships between the available modalities through an adequate fusion mechanism. This paper proposes a recurrent neural network architecture that addresses these drawbacks by tracking the context of the conversation, the interlocutors' states, and the emotions conveyed by the speakers. The proposed model outperforms the state of the art on two benchmark datasets across a variety of accuracy and regression metrics.
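The core idea described above can be sketched in a few lines: per modality, a recurrent cell updates each speaker's state as the conversation progresses, and a fusion step combines the per-modality states into a prediction. The sketch below is a minimal, hypothetical illustration of that idea (random weights, averaging as the fusion mechanism, a made-up `w_out` sentiment head), not the authors' actual equations or pairwise-attention fusion.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # feature/hidden size, chosen arbitrarily for the sketch

def make_gru():
    """Random GRU parameters (square matrices for simplicity)."""
    return {k: rng.normal(scale=0.1, size=(D, D))
            for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

def gru_step(p, x, h):
    """One standard GRU update: returns the new hidden state."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(p["Wz"] @ x + p["Uz"] @ h)          # update gate
    r = sig(p["Wr"] @ x + p["Ur"] @ h)          # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1 - z) * h + z * h_cand

modalities = ("text", "audio", "video")
grus = {m: make_gru() for m in modalities}      # one recurrence per modality
speaker_state = {}                              # (speaker, modality) -> state

# A toy two-speaker conversation: each utterance carries one feature
# vector per modality (random stand-ins for real encoder outputs).
conversation = [(t % 2, {m: rng.normal(size=D) for m in modalities})
                for t in range(6)]

w_out = rng.normal(scale=0.1, size=D)           # hypothetical sentiment head
scores = []
for speaker, feats in conversation:
    fused = np.zeros(D)
    for m in modalities:
        h = speaker_state.get((speaker, m), np.zeros(D))
        h = gru_step(grus[m], feats[m], h)      # track this speaker's state
        speaker_state[(speaker, m)] = h
        fused += h
    fused /= len(modalities)                    # naive averaging "fusion"
    scores.append(np.tanh(w_out @ fused))       # per-utterance score in (-1, 1)
```

An application with only a subset of modalities available could run the same loop over fewer entries in `modalities`, which is the flexibility the abstract highlights.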
Original title: Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation
Original abstract: Sentiment Analysis and Emotion Detection in conversation is key in a number of real-world applications, with different applications leveraging different kinds of data to be able to achieve reasonably accurate predictions. Multimodal Emotion Detection and Sentiment Analysis can be particularly useful as applications will be able to use specific subsets of the available modalities, as per their available data, to be able to produce relevant predictions. Current systems dealing with Multimodal functionality fail to leverage and capture the context of the conversation through all modalities, the current speaker and listener(s) in the conversation, and the relevance and relationship between the available modalities through an adequate fusion mechanism. In this paper, we propose a recurrent neural network architecture that attempts to take into account all the mentioned drawbacks, and keeps track of the context of the conversation, interlocutor states, and the emotions conveyed by the speakers in the conversation. Our proposed model outperforms the state of the art on two benchmark datasets on a variety of accuracy and regression metrics.
Authors: Aman Shenoy, Ashish Sardana
Link: http://cn.arxiv.org/abs/2002.08267