Sustaining Fairness via Incremental Learning
Main authors: | , |
---|---|
Format: | Article |
Language: | English |
Abstract: | Machine learning systems are often deployed for making critical decisions
like credit lending, hiring, etc. While making decisions, such systems often
encode the user's demographic information (like gender, age) in their
intermediate representations. This can lead to decisions that are biased
towards specific demographics. Prior work has focused on debiasing intermediate
representations to ensure fair decisions. However, these approaches fail to
remain fair with changes in the task or demographic distribution. To ensure
fairness in the wild, it is important for a system to adapt to such changes as
it accesses new data in an incremental fashion. In this work, we propose to
address this issue by introducing the problem of learning fair representations
in an incremental learning setting. To this end, we present Fairness-aware
Incremental Representation Learning (FaIRL), a representation learning system
that can sustain fairness while incrementally learning new tasks. FaIRL is able
to achieve fairness and learn new tasks by controlling the rate-distortion
function of the learned representations. Our empirical evaluations show that
FaIRL is able to make fair decisions while achieving high performance on the
target task, outperforming several baselines. |
DOI: | 10.48550/arxiv.2208.12212 |
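The abstract states that FaIRL achieves fairness and learns new tasks by controlling the rate-distortion function of the learned representations. As a rough illustration of what such a quantity looks like, the sketch below computes a standard coding-rate estimate, R(Z, ε) = ½ log det(I + d/(nε²) ZᵀZ), i.e. the number of bits needed to encode n d-dimensional representations up to distortion ε. This is the generic coding-rate formula from rate-distortion-based representation learning, not necessarily the exact objective used in the paper; the function name and example data are illustrative assumptions.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Approximate coding rate (rate-distortion) of representations.

    Z   : (n, d) array, one d-dimensional representation per row.
    eps : allowed distortion when encoding the representations.

    Illustrative sketch only -- the paper's actual objective may differ.
    """
    n, d = Z.shape
    # log-det of the regularized scatter matrix; slogdet is numerically stable
    sign, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z.T @ Z)
    return 0.5 * logdet

rng = np.random.default_rng(0)
Z_spread = rng.normal(size=(100, 16))      # diverse, spread-out representations
Z_collapsed = np.full((100, 16), 0.1)      # collapsed (near-constant) representations

# Spread-out representations need more bits to encode than collapsed ones,
# which is why maximizing/constraining this quantity controls representation diversity.
print(coding_rate(Z_spread) > coding_rate(Z_collapsed))
```

Intuitively, a debiasing objective in this family would keep the coding rate of task-relevant structure high while suppressing the rate attributable to demographic attributes.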