Fairness Evaluation in the Presence of Biased Noisy Labels (cs.AI)
- April 2, 2020
- Notes
Risk assessment tools are widely used across the United States to inform decision making within the criminal justice system, and considerable attention has recently been devoted to whether such tools may suffer from racial bias. A fundamental issue in this type of assessment is that the model is trained and evaluated on a variable (arrest) that may be a noisy version of the unobserved outcome of real interest (offense). The authors propose a sensitivity analysis framework for assessing how assumptions about group-dependent label noise affect the predictive bias properties of a risk assessment model as a predictor of reoffense. Experiments on two real-world criminal justice data sets show that even small biases in the observed labels can call into question the conclusions of an analysis based on the noisy outcome.
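The core phenomenon can be illustrated with a minimal simulation sketch (this is my own illustration, not the paper's code or its exact framework): generate a "true" outcome, create an observed label by flipping true negatives to positives at a group-dependent rate (e.g. over-policing producing arrests without offenses), and compare a group fairness metric computed on the true versus the observed labels. All names (`observed_labels`, `fnr_gap`, the noise rates) are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: two groups, a risk score, and a true outcome
# (reoffense) drawn so that the score is well calibrated by construction.
n = 100_000
group = rng.integers(0, 2, size=n)                 # group 0 / group 1
risk = rng.uniform(size=n)                          # model risk score
y_true = (rng.uniform(size=n) < risk).astype(int)   # true reoffense

def observed_labels(y, group, rate0, rate1):
    """Observed label (arrest): flip true negatives to positives at a
    group-specific rate, mimicking arrests without an offense."""
    rate = np.where(group == 0, rate0, rate1)
    flipped = (rng.uniform(size=y.size) < rate) & (y == 0)
    return np.where(flipped, 1, y)

def fnr_gap(y, group, score, thresh=0.5):
    """Difference in false-negative rates (group 1 minus group 0)."""
    pred_neg = score < thresh
    rates = []
    for g in (0, 1):
        pos = (group == g) & (y == 1)
        rates.append(pred_neg[pos].mean())
    return rates[1] - rates[0]

# On the true outcome, the score shows essentially no FNR gap ...
gap_true = fnr_gap(y_true, group, risk)
# ... but a modest asymmetry in label noise (5% vs 15%) opens a gap
# when the same metric is computed on the observed (noisy) labels.
y_obs = observed_labels(y_true, group, rate0=0.05, rate1=0.15)
gap_obs = fnr_gap(y_obs, group, risk)
print(f"FNR gap on true labels:  {gap_true:+.3f}")
print(f"FNR gap on noisy labels: {gap_obs:+.3f}")
```

Sweeping the assumed noise rates (`rate0`, `rate1`) over a plausible range and recording how the fairness metric moves is the general shape of a sensitivity analysis: if the conclusion flips within that range, it is not robust to the label noise.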
Original title: Fairness Evaluation in Presence of Biased Noisy Labels
Original abstract: Risk assessment tools are widely used around the country to inform decision making within the criminal justice system. Recently, considerable attention has been devoted to the question of whether such tools may suffer from racial bias. In this type of assessment, a fundamental issue is that the training and evaluation of the model is based on a variable (arrest) that may represent a noisy version of an unobserved outcome of more central interest (offense). We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model as a predictor of reoffense. Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
Authors: Riccardo Fogliato, Max G'Sell, Alexandra Chouldechova
Link: https://arxiv.org/abs/2003.13808