Games for Fairness and Interpretability
Saved in:

| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
| Tags | |
Abstract: As Machine Learning (ML) systems become more ubiquitous, ensuring the fair
and equitable application of their underlying algorithms is of paramount
importance. We argue that one way to achieve this is to proactively cultivate
public pressure for ML developers to design and develop fairer algorithms --
and that one way to cultivate public pressure while simultaneously serving the
interests and objectives of algorithm developers is through gameplay. We
propose a new class of games -- "games for fairness and interpretability" --
as one example of an incentive-aligned approach for producing fairer and more
equitable algorithms. Games for fairness and interpretability are
carefully designed games with mass appeal. They are inherently engaging,
provide insights into how machine learning models work, and ultimately produce
data that helps researchers and developers improve their algorithms. We
highlight several possible examples of games, their implications for fairness
and interpretability, how their proliferation could create positive public
pressure by narrowing the gap between algorithm developers and the general
public, and why the machine learning community could benefit from them.
DOI: 10.48550/arxiv.2004.09551