Incentivizing Truthful Collaboration in Heterogeneous Federated Learning
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | It is well known that Federated Learning (FL) is vulnerable to manipulated updates from clients. In this work we study the impact of data heterogeneity on clients' incentives to manipulate their updates. We formulate a game in which clients may upscale their gradient updates in order to "steer" the server model to their advantage. We develop a payment rule that disincentivizes sending large gradient updates and steers the clients towards truthfully reporting their gradients. We also derive explicit bounds on the clients' payments and on the convergence rate of the global model, which allows us to study the trade-off between heterogeneity, payments, and convergence. |
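The abstract describes the mechanism only at a high level, so the sketch below is purely illustrative: it assumes a hypothetical quadratic payment rule, lam * ||reported update||^2, as a stand-in for the rule actually derived in the paper (which this record does not specify). It shows the incentive structure the abstract alludes to: upscaling a gradient by a factor of 3 triples a client's pull on the averaged update, while the quadratic payment grows ninefold.

```python
# Illustrative sketch only: the payment rule below (quadratic in the norm of
# the reported update) is an assumption for exposition, NOT the rule derived
# in the paper (arXiv:2412.00980).
import numpy as np

rng = np.random.default_rng(0)

DIM = 5          # model dimension
N_CLIENTS = 4    # number of clients
LAM = 0.5        # payment weight (hypothetical tuning parameter)

def reported_update(true_grad, scale):
    """A client 'manipulates' by upscaling its true gradient before reporting."""
    return scale * true_grad

def payment(reported_grad, lam=LAM):
    """Hypothetical payment: proportional to the squared norm of the report,
    so it disincentivizes sending large (upscaled) gradient updates."""
    return lam * np.linalg.norm(reported_grad) ** 2

# Heterogeneous clients: each draws its own true gradient direction.
true_grads = [rng.normal(size=DIM) for _ in range(N_CLIENTS)]

for c, g in enumerate(true_grads):
    for scale in (1.0, 3.0):
        r = reported_update(g, scale)
        # A client's pull on the FedAvg-style average grows linearly in the
        # scale, but the quadratic payment grows with the scale squared.
        print(f"client {c}, scale {scale:.0f}: "
              f"influence on average = {np.linalg.norm(r) / N_CLIENTS:.3f}, "
              f"payment = {payment(r):.3f}")
```

Because influence grows linearly in the scale while this assumed payment grows quadratically, truthful reporting (scale 1) becomes the better deal once the payment weight is large enough; the paper's explicit bounds quantify that trade-off against heterogeneity and convergence.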
DOI: | 10.48550/arxiv.2412.00980 |