Learning Optimal Fair Policies
Saved in:

Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary:
The Thirty-sixth International Conference on Machine Learning (ICML 2019). Systematic discriminatory biases present in our society influence the way data is collected and stored, the way variables are defined, and the way scientific findings are put into practice as policy. Automated decision procedures and learning algorithms applied to such data may serve to perpetuate existing injustice or unfairness in our society. In this paper, we consider how to make optimal but fair decisions, which "break the cycle of injustice" by correcting for the unfair dependence of both decisions and outcomes on sensitive features (e.g., variables that correspond to gender, race, disability, or other protected attributes). We use methods from causal inference and constrained optimization to learn optimal policies in a way that addresses multiple potential biases which afflict data analysis in sensitive contexts, extending the approach of (Nabi and Shpitser 2018). Our proposal comes equipped with the theoretical guarantee that the chosen fair policy will induce a joint distribution for new instances that satisfies given fairness constraints. We illustrate our approach with both synthetic data and real criminal justice data.
DOI: 10.48550/arxiv.1809.02244
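
The summary above mentions learning optimal policies by constrained optimization subject to fairness constraints. The sketch below is purely illustrative and is not the paper's method: it enforces a simple demographic-parity-style bound on decision rates rather than the path-specific causal constraints that the work of Nabi and Shpitser (2018) builds on, and the toy data, the variable names (s, x, tau), and the SciPy-based solver are all assumptions made here for the example.

```python
# Illustrative sketch only: maximize expected utility of a stochastic policy
# subject to a fairness constraint. Not the authors' algorithm.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: sensitive attribute s, a covariate x correlated with s,
# and the utility each decision a in {0, 1} would yield per instance.
n = 2000
s = rng.binomial(1, 0.5, n)
x = rng.normal(size=n) + 0.8 * s                            # x carries information about s
utility_a1 = x + 0.5 * s + rng.normal(scale=0.5, size=n)    # payoff if a = 1
utility_a0 = np.zeros(n)                                    # payoff if a = 0

def policy(theta, features):
    """Stochastic policy P(a = 1 | x); s is deliberately not an input."""
    return 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * features)))

def neg_expected_utility(theta):
    # Objective: negative of the policy's expected utility on the sample.
    p = policy(theta, x)
    return -np.mean(p * utility_a1 + (1 - p) * utility_a0)

def fairness_margin(theta, tau=0.02):
    # Feasible iff decision rates for s = 1 and s = 0 differ by at most tau.
    p = policy(theta, x)
    return tau - abs(p[s == 1].mean() - p[s == 0].mean())

result = minimize(
    neg_expected_utility,
    x0=np.zeros(2),
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": fairness_margin}],
)
print("fitted policy parameters:", result.x)
print("expected utility under the constrained policy:", -result.fun)
```

Without the inequality constraint the optimizer is free to exploit the part of x that tracks s; the constraint keeps the learned decision rule's group-wise decision rates within the chosen tolerance, which is a much cruder notion of fairness than the causal, path-specific constraints described in the summary.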