Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat (cs.AI)
- April 2, 2020
- Notes
Overweight individuals, and especially women, are disparaged as immoral, unhealthy, and low class. These negative conceptions are not intrinsic to obesity; they are the tainted fruit of cultural learning. Scholars often cite media consumption as a key mechanism for learning cultural biases, but it remains unclear how this public culture becomes private culture. Here we provide a computational account of this learning mechanism, showing that cultural schemata can be learned from news reporting. We extract schemata about obesity from New York Times articles with word2vec, a neural language model inspired by human cognition. We identify several cultural schemata that link obesity to gender, immorality, poor health, and low socioeconomic class. Such schemata may be subtly but pervasively activated by our language; thus, language can chronically reproduce biases (e.g., about weight and health). Our findings also reinforce ongoing concerns that machine learning can encode, and reproduce, harmful human biases.
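The core technical move is to operationalize a cultural schema as a direction in word2vec's embedding space and then measure how obesity-related terms project onto it. Below is a minimal sketch of that idea using gensim; the corpus file name (`nyt_articles.txt`), the hyperparameters, and the antonym word lists are illustrative assumptions, not the authors' actual pipeline or lexicons.

```python
# Sketch: train word2vec on a news corpus, build a "cultural dimension" from
# antonym pairs, and project obesity-related words onto it.
import numpy as np
from gensim.models import Word2Vec

# Assumed input: a plain-text dump of articles, one sentence per line.
with open("nyt_articles.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f]

# Skip-gram word2vec; hyperparameters are illustrative, not the paper's.
model = Word2Vec(sentences, vector_size=300, window=5,
                 min_count=10, sg=1, epochs=5)
wv = model.wv

def cultural_dimension(pairs, wv):
    """Average difference vector over antonym pairs (pole A minus pole B)."""
    diffs = [wv[a] - wv[b] for a, b in pairs if a in wv and b in wv]
    return np.mean(diffs, axis=0)

def project(word, dim, wv):
    """Cosine similarity between a word vector and a cultural dimension."""
    v = wv[word]
    return float(np.dot(v, dim) / (np.linalg.norm(v) * np.linalg.norm(dim)))

# Illustrative morality dimension; the paper's word lists differ.
morality = cultural_dimension(
    [("good", "bad"), ("moral", "immoral"), ("virtuous", "sinful")], wv)

for w in ("obese", "overweight", "thin", "slim"):
    if w in wv:
        print(w, round(project(w, morality, wv), 3))
```

A positive projection means a term leans toward the first pole of each pair (here, the "moral" pole), a negative one toward the second; analogous dimensions built from gender, health, and class word pairs would probe the other schemata the paper examines.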
Original title: Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat
Original authors: Alina Arseniev-Koehler, Jacob G. Foster
Original link: https://arxiv.org/abs/2003.12133