Human Comprehension of Fairness in Machine Learning (CS Society)
- January 3, 2020
- Notes
Bias in machine learning has led to injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct algorithmic bias in these fields. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand them. The authors take initial steps toward bridging the gap between ML researchers and the public by addressing the question: does a non-technical audience understand a basic definition of ML fairness? They develop a metric to measure comprehension of one such definition, demographic parity. They validate this metric using online surveys, and study the relationship between comprehension and sentiment, demographics, and the application at hand.
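Demographic parity, the fairness definition studied in the paper, requires that a classifier's positive-prediction rate be equal across demographic groups. A minimal sketch of how that gap could be computed (the function name and example data are illustrative, not from the paper):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    totals = defaultdict(int)     # number of individuals per group
    positives = defaultdict(int)  # number of positive predictions per group
    for y_hat, g in zip(predictions, groups):
        totals[g] += 1
        positives[g] += y_hat
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring decisions (1 = hire) for two demographic groups:
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # group A hired at 0.75, group B at 0.25 -> gap 0.5
```

A gap of 0 means the classifier satisfies demographic parity exactly; larger values indicate a stronger disparity between groups.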
Original title: Human Comprehension of Fairness in Machine Learning
Original abstract: Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a non-technical audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of one such definition, demographic parity. We validate this metric using online surveys, and study the relationship between comprehension and sentiment, demographics, and the application at hand.
Original authors: Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, Michael Carl Tschantz
Original link: https://arxiv.org/abs/2001.00089