Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat (CS AI)
- April 2, 2020
- Notes
Overweight people, and especially women, are disparaged as immoral, unhealthy, and lower class. These negative conceptions are not intrinsic to obesity; they are the tainted fruit of cultural learning. Scholars often cite media consumption as a key mechanism for learning cultural biases, but it remains unclear how this public culture becomes private culture. Here we provide a computational account of this learning mechanism, showing that such cultural cognitive schemata can be learned from news reporting. Using word2vec, a neural language model inspired by human cognition, we extract cognitive schemata about obesity from New York Times articles. We identify several cultural schemata that link obesity to gender, immorality, poor health, and low socioeconomic status. Such schemata may be subtly but pervasively activated by our language; language can thus chronically reproduce biases (for example, about weight and health). Our findings also reinforce ongoing concerns that machine learning can encode and reproduce harmful human biases.
Original title: Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat
Original abstract: Overweight individuals, and especially women, are disparaged as immoral, unhealthy, and low class. These negative conceptions are not intrinsic to obesity; they are the tainted fruit of cultural learning. Scholars often cite media consumption as a key mechanism for learning cultural biases, but it remains unclear how this public culture becomes private culture. Here we provide a computational account of this learning mechanism, showing that cultural schemata can be learned from news reporting. We extract schemata about obesity from New York Times articles with word2vec, a neural language model inspired by human cognition. We identify several cultural schemata that link obesity to gender, immorality, poor health, and low socioeconomic class. Such schemata may be subtly but pervasively activated by our language; thus, language can chronically reproduce biases (e.g., about weight and health). Our findings also reinforce ongoing concerns that machine learning can encode, and reproduce, harmful human biases.
Original authors: Alina Arseniev-Koehler, Jacob G. Foster
Original link: https://arxiv.org/abs/2003.12133
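The abstract does not spell out how schemata are measured in the embedding space. One common approach in this literature is to build a semantic axis from antonym pairs (for example, "fat" minus "thin") and project other word vectors onto it. Below is a minimal NumPy sketch of that idea; the word list, the random toy vectors, and the artificial bias injected into "lazy" are all illustrative assumptions standing in for trained word2vec embeddings, not the paper's data or code.

```python
import numpy as np

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def semantic_axis(pairs, vectors):
    """Average the unit difference vectors of antonym pairs
    (e.g. fat - thin) into a single cultural dimension."""
    diffs = [unit(vectors[a] - vectors[b]) for a, b in pairs]
    return unit(np.mean(diffs, axis=0))

def project(word, axis, vectors):
    """Cosine similarity of a word with the axis; positive values
    lean toward the first pole of each pair (here, 'fat')."""
    return float(unit(vectors[word]) @ axis)

# Toy embeddings standing in for vectors trained on a news corpus.
rng = np.random.default_rng(0)
words = ["fat", "thin", "overweight", "slim", "lazy", "disciplined"]
vectors = {w: rng.normal(size=50) for w in words}

# Inject an artificial association to mimic a learned cultural bias:
# push 'lazy' toward the 'fat' pole of the space.
vectors["lazy"] = vectors["lazy"] + (vectors["fat"] - vectors["thin"])

axis = semantic_axis([("fat", "thin"), ("overweight", "slim")], vectors)
for w in ["lazy", "disciplined"]:
    print(w, round(project(w, axis, vectors), 3))
```

With real embeddings trained on the New York Times corpus, a systematic positive projection of moral, health, or class terms onto such an axis would be evidence of the kind of schemata the paper reports.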