Models of fairness in federated learning
Format: Article
Language: English
Abstract: In many real-world situations, data is distributed across multiple
self-interested agents. These agents can collaborate to build a machine
learning model based on data from multiple agents, potentially reducing the
error each experiences. However, sharing models in this way raises questions of
fairness: to what extent can the error experienced by one agent be
significantly lower than the error experienced by another agent in the same
coalition? In this work, we consider two notions of fairness that each may be
appropriate in different circumstances: "egalitarian fairness" (which aims to
bound how dissimilar error rates can be) and "proportional fairness" (which
aims to reward players for contributing more data). We similarly consider two
common methods of model aggregation, one where a single model is created for
all agents (uniform), and one where an individualized model is created for each
agent. For egalitarian fairness, we obtain a tight multiplicative bound on how
widely error rates can diverge between agents collaborating (which holds for
both aggregation methods). For proportional fairness, we show that the
individualized aggregation method always gives a small player error that is
upper bounded by proportionality. For uniform aggregation, we show that this
upper bound is guaranteed for any individually rational coalition (where no
player wishes to leave to do local learning).
DOI: 10.48550/arxiv.2112.00818
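The two fairness notions described in the abstract can be illustrated with a minimal sketch. The checks below are illustrative readings, not the paper's exact definitions or bounds: `egalitarian_fair` assumes the bound takes the form of a multiplicative factor `c` between the largest and smallest error in the coalition, and `proportionally_fair` assumes that an agent contributing at least as much data as another should have an error no larger than the inversely proportional fraction of that agent's error. The agent names, error rates, and sample counts are made up for illustration.

```python
from itertools import permutations


def egalitarian_fair(errors, c):
    """Assumed form of egalitarian fairness: the largest per-agent error
    in the coalition is at most c times the smallest per-agent error."""
    vals = list(errors.values())
    return max(vals) <= c * min(vals)


def proportionally_fair(errors, samples):
    """Assumed form of proportional fairness: if agent i contributes at
    least as much data as agent j, then i's error is at most the
    inversely proportional fraction of j's error."""
    return all(
        errors[i] <= (samples[j] / samples[i]) * errors[j]
        for i, j in permutations(errors, 2)
        if samples[i] >= samples[j]
    )


# Toy coalition of three agents: larger data contributors happen to see
# lower error here (made-up numbers, not results from the paper).
errors = {"a": 0.03, "b": 0.10, "c": 0.07}
samples = {"a": 5000, "b": 2000, "c": 2500}

print(egalitarian_fair(errors, c=4.0))       # True: 0.10 <= 4 * 0.03
print(proportionally_fair(errors, samples))  # True for these numbers
```

The sketch only inspects error rates after the fact; it does not model the uniform or individualized aggregation methods themselves, whose properties are the subject of the paper's bounds.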