Human Comprehension of Fairness in Machine Learning (CS Society)

Bias in machine learning has caused injustice in several domains, including medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct algorithmic bias in these domains. While some of these definitions are grounded in established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and, perhaps more importantly, whether they understand them. We take a first step toward bridging the gap between ML researchers and the public by addressing the question: does a non-technical audience understand a basic definition of ML fairness? We develop a metric for measuring comprehension of one such definition, demographic parity. We validate this metric using online surveys and study the relationship between comprehension and sentiment, demographics, and the application at hand.

Original title: Human Comprehension of Fairness in Machine Learning

Original abstract: Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a non-technical audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of one such definition: demographic parity. We validate this metric using online surveys, and study the relationship between comprehension and sentiment, demographics, and the application at hand.
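The abstract names demographic parity without defining it. Informally, a classifier satisfies demographic parity when its positive-prediction rate is the same across demographic groups. A minimal illustrative sketch of checking this condition (the data and function names below are hypothetical, not taken from the paper):

```python
# Demographic parity (informal): the rate of positive predictions
# should be equal, or nearly equal, across demographic groups.

def positive_rate(predictions):
    """Fraction of predictions equal to 1 (the positive outcome)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two
    groups; a gap of 0 means exact demographic parity."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical hiring decisions (1 = hired) for two groups:
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # positive rate 3/8 = 0.375

print(demographic_parity_gap(group_a, group_b))  # prints 0.25
```

In practice a small nonzero gap is usually tolerated; the paper's surveys probe whether non-technical respondents grasp this kind of equal-rates condition at all.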

Authors: Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, Michael Carl Tschantz

Link: https://arxiv.org/abs/2001.00089