Bias detection by using name disparity tables across protected groups


Full Description

Bibliographic Details
Published in: Journal of responsible technology 2022-04, Vol. 9, p. 100020, Article 100020
Main Authors: Mishraky, Elhanan; Arie, Aviv Ben; Horesh, Yair; Lador, Shir Meir
Format: Article
Language: English
Online Access: Full text
Description
Abstract: As AI-based models take an increasingly central role in our lives, so does the concern for fairness. In recent years, mounting evidence has revealed how vulnerable AI models are to bias and the challenges involved in its detection and mitigation. Our contribution is threefold. Firstly, we gather name disparity tables across protected groups, allowing us to estimate sensitive attributes (gender, race). Using these estimates, we compute bias metrics given a classification model's predictions. We leverage only names/zip codes; hence, our method is model- and feature-agnostic. Secondly, we offer an open-source Python package that produces a bias detection report based on our method. Finally, we demonstrate that names of older individuals are better predictors of race and gender and that double surnames are a reasonable predictor of gender. We tested our method on publicly available datasets (US Congress) and classifiers (COMPAS) and found it to be consistent with them.
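The abstract describes a two-step approach: first estimate each individual's sensitive attributes from a name disparity table, then compute bias metrics over a classifier's predictions weighted by those estimates. The minimal Python sketch below illustrates that general idea only; it is not the authors' open-source package or its API, and the table values, function names (estimate_group_probs, demographic_parity_gap), and data are hypothetical toy examples.

import pandas as pd

# Hypothetical name disparity table: for each first name, the estimated share of
# individuals in each gender group (rows sum to 1). Real tables would be built
# from public sources such as census or voter-registration data.
name_gender_table = pd.DataFrame(
    {"female": [0.98, 0.02, 0.55], "male": [0.02, 0.98, 0.45]},
    index=["mary", "john", "casey"],
)

def estimate_group_probs(names, disparity_table):
    # Map each name to its group-probability row; names missing from the table
    # fall back to uniform probabilities.
    uniform = pd.Series(1.0 / disparity_table.shape[1], index=disparity_table.columns)
    rows = [
        disparity_table.loc[n.lower()] if n.lower() in disparity_table.index else uniform
        for n in names
    ]
    return pd.DataFrame(rows).reset_index(drop=True)

def demographic_parity_gap(predictions, group_probs):
    # Expected positive-prediction rate per group, weighting each individual by
    # their estimated probability of belonging to that group, then the max-min gap.
    preds = pd.Series(predictions, dtype=float)
    rates = {
        g: float((preds * group_probs[g]).sum() / group_probs[g].sum())
        for g in group_probs.columns
    }
    return max(rates.values()) - min(rates.values()), rates

# Toy usage: three individuals and a binary classifier's predictions.
probs = estimate_group_probs(["Mary", "John", "Casey"], name_gender_table)
gap, rates = demographic_parity_gap([0, 1, 1], probs)
print(rates, gap)

A full report in the spirit of the paper would repeat this computation for several fairness metrics and for race estimates derived from surnames and zip codes, as the abstract indicates.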
ISSN: 2666-6596
DOI: 10.1016/j.jrt.2021.100020