Rethinking Fairness for Human-AI Collaboration
Abstract: Existing approaches to algorithmic fairness aim to ensure equitable outcomes if human decision-makers comply perfectly with algorithmic decisions. However, perfect compliance with the algorithm is rarely a reality or even a desirable outcome in human-AI collaboration. Yet, recent studies have shown that selective compliance with fair algorithms can amplify discrimination relative to the prior human policy. As a consequence, ensuring equitable outcomes requires fundamentally different algorithmic design principles that ensure robustness to the decision-maker's (a priori unknown) compliance pattern. We define the notion of compliance-robustly fair algorithmic recommendations, which are guaranteed to (weakly) improve fairness in decisions regardless of the human's compliance pattern. We propose a simple optimization strategy to identify the best performance-improving compliance-robustly fair policy. However, we show that it may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy; thus, if our goal is to improve the equity and accuracy of human-AI collaboration, it may not be desirable to enforce traditional fairness constraints.
DOI: 10.48550/arxiv.2310.03647
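
To make the abstract's central phenomenon concrete, below is a minimal Python sketch, not taken from the paper: it uses demographic parity as the fairness metric and an invented toy population, human policy, and recommendation policy, all of which are illustrative assumptions. It shows how selective compliance with a recommendation that is fair in isolation can widen the disparity relative to the human policy alone, and how one might check a worst-case notion of compliance robustness by letting an adversary pick, per individual, whether the human or the algorithm decides.

```python
# Toy illustration (assumptions, not the paper's construction): binary
# decisions, two groups, and the demographic-parity gap as the metric.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B

# Hypothetical human policy: group A is favored (0.60 vs 0.50 positive rate).
human = rng.random(n) < np.where(group == 0, 0.60, 0.50)

# Hypothetical recommendation that is fair in isolation: a 0.55 positive
# rate for both groups, so its gap is ~0 under perfect compliance.
algo = rng.random(n) < 0.55

def dp_gap(decisions):
    """Demographic-parity gap: |P(d=1 | A) - P(d=1 | B)|."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Selective compliance: the human adopts the recommendation only when it
# is favorable for group A and only when it is unfavorable for group B.
comply = np.where(group == 0, algo >= human, algo <= human)
final = np.where(comply, algo, human)

print(f"human gap:                {dp_gap(human):.3f}")
print(f"full-compliance gap:      {dp_gap(algo):.3f}")
print(f"selective-compliance gap: {dp_gap(final):.3f}")  # exceeds human gap

# Worst case over ALL compliance patterns: per individual, an adversary
# chooses the human or the algorithmic decision to push group rates apart.
hi = np.maximum(human, algo)  # largest achievable decision per person
lo = np.minimum(human, algo)  # smallest achievable decision per person
worst = max(
    hi[group == 0].mean() - lo[group == 1].mean(),  # inflate A, deflate B
    hi[group == 1].mean() - lo[group == 0].mean(),  # inflate B, deflate A
)
print(f"worst-case gap:           {worst:.3f}")

# Compliance-robust fairness (in this toy reading) would require
# worst <= dp_gap(human); the fair-in-isolation policy fails that test.
```

In this toy setting the recommendation deviates from the human policy in both directions for both groups, so an unfavorable compliance pattern can push the group rates apart; a compliance-robust design would have to constrain those deviations so that no compliance pattern widens the gap, which is why robustness is a property of the recommendation relative to the human policy rather than of the recommendation alone.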