Stats: Members: 3657 · Articles: 2'599'751 · Articles rated: 2609
09 October 2024
Article overview
Social Bias Meets Data Bias: The Impacts of Labeling and Measurement Errors on Fairness Criteria
Authors: Yiqiao Liao; Parinaz Naghizadeh
Date: 1 Jun 2022
Abstract: Although many fairness criteria have been proposed to ensure that machine learning algorithms do not exhibit or amplify our existing social biases, these algorithms are trained on datasets that can themselves be statistically biased. In this paper, we investigate the robustness of a number of existing (demographic) fairness criteria when the algorithm is trained on biased data. We consider two forms of dataset bias: errors by prior decision makers in the labeling process, and errors in measurement of the features of disadvantaged individuals. We analytically show that some constraints (such as Demographic Parity) can remain robust when facing certain statistical biases, while others (such as Equalized Odds) are significantly violated if trained on biased data. We also analyze the sensitivity of these criteria and the decision maker's utility to biases. We provide numerical experiments based on three real-world datasets (the FICO, Adult, and German credit score datasets) supporting our analytical findings. Our findings present an additional guideline for choosing among existing fairness criteria, or for proposing new criteria, when available datasets may be biased.
Source: arXiv, 2206.00137
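The contrast the abstract draws can be illustrated with a small simulation (a sketch under assumed error rates and a toy classifier, not the paper's actual model): Demographic Parity conditions only on predictions and group membership, so corrupted labels cannot move it, whereas Equalized Odds conditions on the (possibly mislabeled) outcomes, so labeling errors against the disadvantaged group distort its measured false-positive rate.

```python
import numpy as np

# Illustrative sketch: all rates (80% classifier accuracy, 30% label-flip
# probability) are assumptions chosen to make the effect visible.
rng = np.random.default_rng(0)
n = 200_000

group = rng.integers(0, 2, n)            # 1 = disadvantaged group
y_true = rng.integers(0, 2, n)           # true (unobserved) qualification
# A classifier that is correct with probability 0.8, identically for both groups
correct = rng.random(n) < 0.8
y_pred = np.where(correct, y_true, 1 - y_true)

# Labeling bias: 30% of the disadvantaged group's true positives are
# recorded as negatives by a prior (biased) decision maker
flip = (group == 1) & (y_true == 1) & (rng.random(n) < 0.3)
y_obs = np.where(flip, 0, y_true)

def selection_rate(g):
    # P(pred = 1 | group = g): uses predictions only, never labels
    return y_pred[group == g].mean()

def fpr(labels, g):
    # P(pred = 1 | label = 0, group = g): conditions on the given labels
    neg = (group == g) & (labels == 0)
    return y_pred[neg].mean()

# Demographic Parity gap: unaffected by label errors by construction
dp_gap = abs(selection_rate(0) - selection_rate(1))

# Equalized Odds (false-positive-rate) gap: flipped true positives land
# among the observed negatives and inflate the disadvantaged group's FPR
fpr_gap_true = abs(fpr(y_true, 0) - fpr(y_true, 1))
fpr_gap_obs = abs(fpr(y_obs, 0) - fpr(y_obs, 1))

print(f"DP gap:                  {dp_gap:.3f}")
print(f"FPR gap (true labels):   {fpr_gap_true:.3f}")
print(f"FPR gap (biased labels): {fpr_gap_obs:.3f}")
```

Under these assumptions the Demographic Parity gap and the true-label FPR gap are both near zero, while the FPR gap measured on the biased labels is substantial, mirroring the robustness contrast the paper establishes analytically.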